00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 593
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3259
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.039 The recommended git tool is: git
00:00:00.039 using credential 00000000-0000-0000-0000-000000000002
00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.054 Fetching changes from the remote Git repository
00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.089 Using shallow fetch with depth 1
00:00:00.090 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.090 > git --version # timeout=10
00:00:00.130 > git --version # 'git version 2.39.2'
00:00:00.130 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.870 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.881 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.892 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:04.892 > git config core.sparsecheckout # timeout=10
00:00:04.903 > git read-tree -mu HEAD # timeout=10
00:00:04.919 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:04.939 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:04.939 > git rev-list --no-walk a4dfdc44df8d07f755780ce4c74effabd30d33d0 # timeout=10
00:00:05.052 [Pipeline] Start of Pipeline
00:00:05.067 [Pipeline] library
00:00:05.068 Loading library shm_lib@master
00:00:05.068 Library shm_lib@master is cached. Copying from home.
00:00:05.085 [Pipeline] node
00:00:05.092 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.094 [Pipeline] {
00:00:05.105 [Pipeline] catchError
00:00:05.107 [Pipeline] {
00:00:05.116 [Pipeline] wrap
00:00:05.124 [Pipeline] {
00:00:05.130 [Pipeline] stage
00:00:05.132 [Pipeline] { (Prologue)
00:00:05.291 [Pipeline] sh
00:00:05.570 + logger -p user.info -t JENKINS-CI
00:00:05.593 [Pipeline] echo
00:00:05.595 Node: WFP8
00:00:05.603 [Pipeline] sh
00:00:05.897 [Pipeline] setCustomBuildProperty
00:00:05.907 [Pipeline] echo
00:00:05.908 Cleanup processes
00:00:05.912 [Pipeline] sh
00:00:06.199 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.199 1314435 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.212 [Pipeline] sh
00:00:06.495 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.495 ++ grep -v 'sudo pgrep'
00:00:06.495 ++ awk '{print $1}'
00:00:06.495 + sudo kill -9
00:00:06.495 + true
00:00:06.514 [Pipeline] cleanWs
00:00:06.525 [WS-CLEANUP] Deleting project workspace...
00:00:06.525 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.531 [WS-CLEANUP] done
00:00:06.535 [Pipeline] setCustomBuildProperty
00:00:06.546 [Pipeline] sh
00:00:06.822 + sudo git config --global --replace-all safe.directory '*'
00:00:06.886 [Pipeline] httpRequest
00:00:06.916 [Pipeline] echo
00:00:06.917 Sorcerer 10.211.164.101 is alive
00:00:06.927 [Pipeline] httpRequest
00:00:06.930 HttpMethod: GET
00:00:06.931 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:06.931 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:06.955 Response Code: HTTP/1.1 200 OK
00:00:06.955 Success: Status code 200 is in the accepted range: 200,404
00:00:06.956 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:19.420 [Pipeline] sh
00:00:19.701 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:19.718 [Pipeline] httpRequest
00:00:19.742 [Pipeline] echo
00:00:19.743 Sorcerer 10.211.164.101 is alive
00:00:19.752 [Pipeline] httpRequest
00:00:19.756 HttpMethod: GET
00:00:19.757 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:19.757 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:19.781 Response Code: HTTP/1.1 200 OK
00:00:19.781 Success: Status code 200 is in the accepted range: 200,404
00:00:19.781 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:57.446 [Pipeline] sh
00:01:57.732 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:02:00.280 [Pipeline] sh
00:02:00.564 + git -C spdk log --oneline -n5
00:02:00.564 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:02:00.564 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:02:00.564 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:02:00.564 e03c164a1 nvme: add nvme_ctrlr_lock
00:02:00.564 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:02:00.585 [Pipeline] withCredentials
00:02:00.599 > git --version # timeout=10
00:02:00.609 > git --version # 'git version 2.39.2'
00:02:00.625 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:00.628 [Pipeline] {
00:02:00.637 [Pipeline] retry
00:02:00.639 [Pipeline] {
00:02:00.657 [Pipeline] sh
00:02:00.939 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:02:00.952 [Pipeline] }
00:02:00.976 [Pipeline] // retry
00:02:00.981 [Pipeline] }
00:02:01.004 [Pipeline] // withCredentials
00:02:01.015 [Pipeline] httpRequest
00:02:01.039 [Pipeline] echo
00:02:01.041 Sorcerer 10.211.164.101 is alive
00:02:01.050 [Pipeline] httpRequest
00:02:01.055 HttpMethod: GET
00:02:01.056 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:01.057 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:01.062 Response Code: HTTP/1.1 200 OK
00:02:01.063 Success: Status code 200 is in the accepted range: 200,404
00:02:01.063 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:41.093 [Pipeline] sh
00:02:41.376 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:42.801 [Pipeline] sh
00:02:43.084 + git -C dpdk log --oneline -n5
00:02:43.084 eeb0605f11 version: 23.11.0
00:02:43.084 238778122a doc: update release notes for 23.11
00:02:43.084 46aa6b3cfc doc: fix description of RSS features
00:02:43.084 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:43.084 7e421ae345 devtools: support skipping forbid rule check
00:02:43.096 [Pipeline] }
00:02:43.115 [Pipeline] // stage
00:02:43.124 [Pipeline] stage
00:02:43.126 [Pipeline] { (Prepare)
00:02:43.148 [Pipeline] writeFile
00:02:43.164 [Pipeline] sh
00:02:43.472 + logger -p user.info -t JENKINS-CI
00:02:43.484 [Pipeline] sh
00:02:43.766 + logger -p user.info -t JENKINS-CI
00:02:43.778 [Pipeline] sh
00:02:44.061 + cat autorun-spdk.conf
00:02:44.061 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:44.061 SPDK_TEST_NVMF=1
00:02:44.061 SPDK_TEST_NVME_CLI=1
00:02:44.061 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:44.061 SPDK_TEST_NVMF_NICS=e810
00:02:44.061 SPDK_TEST_VFIOUSER=1
00:02:44.061 SPDK_RUN_UBSAN=1
00:02:44.061 NET_TYPE=phy
00:02:44.061 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:44.061 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:44.061 RUN_NIGHTLY=1
00:02:44.073 [Pipeline] readFile
00:02:44.098 [Pipeline] withEnv
00:02:44.100 [Pipeline] {
00:02:44.111 [Pipeline] sh
00:02:44.394 + set -ex
00:02:44.394 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:44.394 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:44.394 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:44.394 ++ SPDK_TEST_NVMF=1
00:02:44.394 ++ SPDK_TEST_NVME_CLI=1
00:02:44.394 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:44.394 ++ SPDK_TEST_NVMF_NICS=e810
00:02:44.394 ++ SPDK_TEST_VFIOUSER=1
00:02:44.394 ++ SPDK_RUN_UBSAN=1
00:02:44.394 ++ NET_TYPE=phy
00:02:44.394 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:44.394 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:44.394 ++ RUN_NIGHTLY=1
00:02:44.394 + case $SPDK_TEST_NVMF_NICS in
00:02:44.394 + DRIVERS=ice
00:02:44.394 + [[ tcp == \r\d\m\a ]]
00:02:44.394 + [[ -n ice ]]
00:02:44.394 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:44.394 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:44.394 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:44.394 rmmod: ERROR: Module irdma is not currently loaded
00:02:44.394 rmmod: ERROR: Module i40iw is not currently loaded
00:02:44.394 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:44.394 + true
00:02:44.394 + for D in $DRIVERS
00:02:44.394 + sudo modprobe ice
00:02:44.394 + exit 0
00:02:44.403 [Pipeline] }
00:02:44.417 [Pipeline] // withEnv
00:02:44.422 [Pipeline] }
00:02:44.434 [Pipeline] // stage
00:02:44.443 [Pipeline] catchError
00:02:44.445 [Pipeline] {
00:02:44.457 [Pipeline] timeout
00:02:44.457 Timeout set to expire in 50 min
00:02:44.458 [Pipeline] {
00:02:44.468 [Pipeline] stage
00:02:44.469 [Pipeline] { (Tests)
00:02:44.479 [Pipeline] sh
00:02:44.757 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:44.757 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:44.757 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:44.757 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:44.757 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.757 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:44.757 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:44.757 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:44.757 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:44.757 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:44.757 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:44.757 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:44.757 + source /etc/os-release
00:02:44.757 ++ NAME='Fedora Linux'
00:02:44.757 ++ VERSION='38 (Cloud Edition)'
00:02:44.757 ++ ID=fedora
00:02:44.757 ++ VERSION_ID=38
00:02:44.757 ++ VERSION_CODENAME=
00:02:44.757 ++ PLATFORM_ID=platform:f38
00:02:44.757 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:44.757 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:44.757 ++ LOGO=fedora-logo-icon
00:02:44.757 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:44.757 ++ HOME_URL=https://fedoraproject.org/
00:02:44.757 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:44.757 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:44.757 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:44.757 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:44.757 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:44.757 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:44.757 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:44.757 ++ SUPPORT_END=2024-05-14
00:02:44.757 ++ VARIANT='Cloud Edition'
00:02:44.757 ++ VARIANT_ID=cloud
00:02:44.757 + uname -a
00:02:44.757 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:44.757 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:47.293 Hugepages
00:02:47.293 node hugesize free / total
00:02:47.293 node0 1048576kB 0 / 0
00:02:47.293 node0 2048kB 0 / 0
00:02:47.293 node1 1048576kB 0 / 0
00:02:47.293 node1 2048kB 0 / 0
00:02:47.293
00:02:47.293 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:47.293 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:47.293 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:47.293 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:47.293 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:47.293 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:47.293 + rm -f /tmp/spdk-ld-path
00:02:47.293 + source autorun-spdk.conf
00:02:47.293 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:47.293 ++ SPDK_TEST_NVMF=1
00:02:47.293 ++ SPDK_TEST_NVME_CLI=1
00:02:47.293 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:47.293 ++ SPDK_TEST_NVMF_NICS=e810
00:02:47.293 ++ SPDK_TEST_VFIOUSER=1
00:02:47.293 ++ SPDK_RUN_UBSAN=1
00:02:47.293 ++ NET_TYPE=phy
00:02:47.293 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:47.293 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:47.293 ++ RUN_NIGHTLY=1
00:02:47.293 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:47.293 + [[ -n '' ]]
00:02:47.293 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:47.293 + for M in /var/spdk/build-*-manifest.txt
00:02:47.293 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:47.293 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:47.293 + for M in /var/spdk/build-*-manifest.txt
00:02:47.293 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:47.293 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:47.293 ++ uname
00:02:47.293 + [[ Linux == \L\i\n\u\x ]]
00:02:47.293 + sudo dmesg -T
00:02:47.293 + sudo dmesg --clear
00:02:47.293 + dmesg_pid=1315605
00:02:47.293 + [[ Fedora Linux == FreeBSD ]]
00:02:47.293 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:47.293 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:47.293 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:47.293 + [[ -x /usr/src/fio-static/fio ]]
00:02:47.293 + export FIO_BIN=/usr/src/fio-static/fio
00:02:47.293 + FIO_BIN=/usr/src/fio-static/fio
00:02:47.293 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:47.293 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:47.293 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:47.293 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:47.293 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:47.293 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:47.293 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:47.293 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:47.293 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:47.293 + sudo dmesg -Tw
00:02:47.293 Test configuration:
00:02:47.293 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:47.293 SPDK_TEST_NVMF=1
00:02:47.293 SPDK_TEST_NVME_CLI=1
00:02:47.293 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:47.293 SPDK_TEST_NVMF_NICS=e810
00:02:47.293 SPDK_TEST_VFIOUSER=1
00:02:47.293 SPDK_RUN_UBSAN=1
00:02:47.293 NET_TYPE=phy
00:02:47.293 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:47.293 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:47.293 RUN_NIGHTLY=1
13:32:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:32:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
13:32:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:32:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:32:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:32:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:32:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:32:49 -- paths/export.sh@5 -- $ export PATH
13:32:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:32:49 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:32:49 -- common/autobuild_common.sh@435 -- $ date +%s
13:32:49 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720697569.XXXXXX
13:32:49 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720697569.lD9UWN
13:32:49 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
13:32:49 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
13:32:49 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
13:32:49 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
13:32:49 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:32:49 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:32:49 -- common/autobuild_common.sh@451 -- $ get_config_params
13:32:49 -- common/autotest_common.sh@387 -- $ xtrace_disable
13:32:49 -- common/autotest_common.sh@10 -- $ set +x
13:32:49 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
13:32:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:32:49 -- spdk/autobuild.sh@12 -- $ umask 022
13:32:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:32:49 -- spdk/autobuild.sh@16 -- $ date -u
00:02:47.294 Thu Jul 11 11:32:49 AM UTC 2024
13:32:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:47.294 LTS-59-g4b94202c6
13:32:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
13:32:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:32:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:32:49 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
13:32:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:47.294 13:32:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.294 ************************************
00:02:47.294 START TEST ubsan
00:02:47.294 ************************************
13:32:49 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:02:47.294 using ubsan
00:02:47.294
00:02:47.294 real 0m0.000s
00:02:47.294 user 0m0.000s
00:02:47.294 sys 0m0.000s
13:32:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable
13:32:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.294 ************************************
00:02:47.294 END TEST ubsan
00:02:47.294 ************************************
13:32:49 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
13:32:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
13:32:49 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
13:32:49 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
13:32:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable
13:32:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.294 ************************************
00:02:47.294 START TEST build_native_dpdk
00:02:47.294 ************************************
13:32:49 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
13:32:49 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
13:32:49 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
13:32:49 -- common/autobuild_common.sh@50 -- $ local compiler_version
13:32:49 -- common/autobuild_common.sh@51 -- $ local compiler
13:32:49 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
13:32:49 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
13:32:49 -- common/autobuild_common.sh@55 -- $ compiler=gcc
13:32:49 -- common/autobuild_common.sh@61 -- $ export CC=gcc
13:32:49 -- common/autobuild_common.sh@61 -- $ CC=gcc
13:32:49 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
13:32:49 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
13:32:49 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
13:32:49 -- common/autobuild_common.sh@68 -- $ compiler_version=13
13:32:49 -- common/autobuild_common.sh@69 -- $ compiler_version=13
13:32:49 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
13:32:49 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
13:32:49 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
13:32:49 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
13:32:49 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:32:49 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:02:47.294 eeb0605f11 version: 23.11.0
00:02:47.294 238778122a doc: update release notes for 23.11
00:02:47.294 46aa6b3cfc doc: fix description of RSS features
00:02:47.294 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:47.294 7e421ae345 devtools: support skipping forbid rule check
13:32:49 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
13:32:49 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
13:32:49 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
13:32:49 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
13:32:49 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
13:32:49 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
13:32:49 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
13:32:49 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
13:32:49 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
13:32:49 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
13:32:49 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
13:32:49 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
13:32:49 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
13:32:49 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
13:32:49 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
13:32:49 -- common/autobuild_common.sh@168 -- $ uname -s
13:32:49 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
13:32:49 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
13:32:49 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
13:32:49 -- scripts/common.sh@332 -- $ local ver1 ver1_l
13:32:49 -- scripts/common.sh@333 -- $ local ver2 ver2_l
13:32:49 -- scripts/common.sh@335 -- $ IFS=.-:
13:32:49 -- scripts/common.sh@335 -- $ read -ra ver1
13:32:49 -- scripts/common.sh@336 -- $ IFS=.-:
13:32:49 -- scripts/common.sh@336 -- $ read -ra ver2
13:32:49 -- scripts/common.sh@337 -- $ local 'op=<'
13:32:49 -- scripts/common.sh@339 -- $ ver1_l=3
13:32:49 -- scripts/common.sh@340 -- $ ver2_l=3
13:32:49 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
13:32:49 -- scripts/common.sh@343 -- $ case "$op" in
13:32:49 -- scripts/common.sh@344 -- $ : 1
13:32:49 -- scripts/common.sh@363 -- $ (( v = 0 ))
13:32:49 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
13:32:49 -- scripts/common.sh@364 -- $ decimal 23
13:32:49 -- scripts/common.sh@352 -- $ local d=23
13:32:49 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
13:32:49 -- scripts/common.sh@354 -- $ echo 23
13:32:49 -- scripts/common.sh@364 -- $ ver1[v]=23
13:32:49 -- scripts/common.sh@365 -- $ decimal 21
13:32:49 -- scripts/common.sh@352 -- $ local d=21
13:32:49 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
13:32:49 -- scripts/common.sh@354 -- $ echo 21
13:32:49 -- scripts/common.sh@365 -- $ ver2[v]=21
13:32:49 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
13:32:49 -- scripts/common.sh@366 -- $ return 1
13:32:49 -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:47.294 patching file config/rte_config.h
00:02:47.294 Hunk #1 succeeded at 60 (offset 1 line).
00:02:47.552 13:32:49 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
13:32:49 -- common/autobuild_common.sh@178 -- $ uname -s
13:32:49 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
13:32:49 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
13:32:49 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:51.740 The Meson build system
00:02:51.740 Version: 1.3.1
00:02:51.740 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:51.740 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:02:51.740 Build type: native build
00:02:51.740 Program cat found: YES (/usr/bin/cat)
00:02:51.740 Project name: DPDK
00:02:51.740 Project version: 23.11.0
00:02:51.740 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:51.740 C linker for the host machine: gcc ld.bfd 2.39-16
00:02:51.740 Host machine cpu family: x86_64
00:02:51.740 Host machine cpu: x86_64
00:02:51.740 Message: ## Building in Developer Mode ##
00:02:51.740 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:51.740 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:02:51.740 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:02:51.740 Program python3 found: YES (/usr/bin/python3)
00:02:51.740 Program cat found: YES (/usr/bin/cat)
00:02:51.740 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:51.740 Compiler for C supports arguments -march=native: YES
00:02:51.740 Checking for size of "void *" : 8
00:02:51.740 Checking for size of "void *" : 8 (cached)
00:02:51.740 Library m found: YES
00:02:51.740 Library numa found: YES
00:02:51.740 Has header "numaif.h" : YES
00:02:51.740 Library fdt found: NO
00:02:51.740 Library execinfo found: NO
00:02:51.740 Has header "execinfo.h" : YES
00:02:51.740 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:51.740 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:51.740 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:51.740 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:51.740 Run-time dependency openssl found: YES 3.0.9
00:02:51.740 Run-time dependency libpcap found: YES 1.10.4
00:02:51.740 Has header "pcap.h" with dependency libpcap: YES
00:02:51.740 Compiler for C supports arguments -Wcast-qual: YES
00:02:51.740 Compiler for C supports arguments -Wdeprecated: YES
00:02:51.740 Compiler for C supports arguments -Wformat: YES
00:02:51.740 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:51.740 Compiler for C supports arguments -Wformat-security: NO
00:02:51.740 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:51.740 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:51.740 Compiler for C supports arguments -Wnested-externs: YES
00:02:51.740 Compiler for C supports arguments -Wold-style-definition: YES
00:02:51.740 Compiler for C supports arguments -Wpointer-arith: YES
00:02:51.740 Compiler for C supports arguments -Wsign-compare: YES
00:02:51.740 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:51.740 Compiler for C supports arguments -Wundef: YES
00:02:51.740 Compiler for C supports arguments -Wwrite-strings: YES
00:02:51.740 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:51.740 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:51.740 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:51.740 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:51.740 Program objdump found: YES (/usr/bin/objdump)
00:02:51.740 Compiler for C supports arguments -mavx512f: YES
00:02:51.740 Checking if "AVX512 checking" compiles: YES
00:02:51.740 Fetching value of define "__SSE4_2__" : 1
00:02:51.740 Fetching value of define "__AES__" : 1
00:02:51.740 Fetching value of define "__AVX__" : 1
00:02:51.740 Fetching value of define "__AVX2__" : 1
00:02:51.740 Fetching value of define "__AVX512BW__" : 1
00:02:51.740 Fetching value of define "__AVX512CD__" : 1
00:02:51.740 Fetching value of define "__AVX512DQ__" : 1
00:02:51.740 Fetching value of define "__AVX512F__" : 1
00:02:51.740 Fetching value of define "__AVX512VL__" : 1
00:02:51.740 Fetching value of define "__PCLMUL__" : 1
00:02:51.741 Fetching value of define "__RDRND__" : 1
00:02:51.741 Fetching value of define "__RDSEED__" : 1
00:02:51.741 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:51.741 Fetching value of define "__znver1__" : (undefined)
00:02:51.741 Fetching value of define "__znver2__" : (undefined)
00:02:51.741 Fetching value of define "__znver3__" : (undefined)
00:02:51.741 Fetching value of define "__znver4__" : (undefined)
00:02:51.741 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:51.741 Message: lib/log: Defining dependency "log"
00:02:51.741 Message: lib/kvargs: Defining dependency "kvargs"
00:02:51.741 Message: lib/telemetry: Defining dependency "telemetry"
00:02:51.741 Checking for function "getentropy" : NO
00:02:51.741 Message: lib/eal: Defining dependency "eal"
00:02:51.741 Message: lib/ring: Defining dependency "ring"
00:02:51.741 Message: lib/rcu: Defining dependency "rcu"
00:02:51.741 Message: lib/mempool: Defining dependency "mempool"
00:02:51.741 Message: lib/mbuf: Defining dependency "mbuf"
00:02:51.741 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:51.741 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:51.741 Compiler for C supports arguments -mpclmul: YES
00:02:51.741 Compiler for C supports arguments -maes: YES
00:02:51.741 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:51.741 Compiler for C supports arguments -mavx512bw: YES
00:02:51.741 Compiler for C supports arguments -mavx512dq: YES
00:02:51.741 Compiler for C supports arguments -mavx512vl: YES
00:02:51.741 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:51.741 Compiler for C supports arguments -mavx2: YES
00:02:51.741 Compiler for C supports arguments -mavx: YES
00:02:51.741 Message: lib/net: Defining dependency "net"
00:02:51.741 Message: lib/meter: Defining dependency "meter"
00:02:51.741 Message: lib/ethdev: Defining dependency "ethdev"
00:02:51.741 Message: lib/pci: Defining dependency "pci"
00:02:51.741 Message: lib/cmdline: Defining dependency "cmdline"
00:02:51.741 Message: lib/metrics: Defining dependency "metrics"
00:02:51.741 Message: lib/hash: Defining dependency "hash"
00:02:51.741 Message: lib/timer: Defining dependency "timer"
00:02:51.741 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:51.741 Message: lib/acl: Defining dependency "acl"
00:02:51.741 Message: lib/bbdev: Defining dependency "bbdev"
00:02:51.741 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:51.741 Run-time dependency libelf found: YES 0.190
00:02:51.741 Message: lib/bpf: Defining dependency "bpf"
00:02:51.741 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:51.741 Message: lib/compressdev: Defining dependency "compressdev"
00:02:51.741 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:51.741 Message: lib/distributor: Defining dependency "distributor"
00:02:51.741 Message: lib/dmadev: Defining dependency "dmadev"
00:02:51.741 Message: lib/efd: Defining dependency "efd"
00:02:51.741 Message: lib/eventdev: Defining dependency "eventdev"
00:02:51.741 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:51.741 Message: lib/gpudev: Defining dependency "gpudev"
00:02:51.741 Message: lib/gro: Defining dependency "gro"
00:02:51.741 Message: lib/gso: Defining dependency "gso"
00:02:51.741 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:51.741 Message: lib/jobstats: Defining dependency "jobstats"
00:02:51.741 Message: lib/latencystats: Defining dependency "latencystats"
00:02:51.741 Message: lib/lpm: Defining dependency "lpm"
00:02:51.741 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:51.741 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:51.741 Message: lib/member: Defining dependency "member"
00:02:51.741 Message: lib/pcapng: Defining dependency "pcapng"
00:02:51.741 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:51.741 Message: lib/power: Defining dependency "power"
00:02:51.741 Message: lib/rawdev: Defining dependency "rawdev"
00:02:51.741 Message: lib/regexdev: Defining dependency "regexdev"
00:02:51.741 Message: lib/mldev: Defining dependency "mldev"
00:02:51.741 Message: lib/rib: Defining dependency "rib"
00:02:51.741 Message: lib/reorder: Defining dependency "reorder"
00:02:51.741 Message: lib/sched: Defining dependency "sched"
00:02:51.741 Message: lib/security: Defining dependency "security"
00:02:51.741 Message: lib/stack: Defining dependency "stack"
00:02:51.741 Has header "linux/userfaultfd.h" : YES
00:02:51.741 Has header "linux/vduse.h" : YES
00:02:51.741 Message: lib/vhost: Defining dependency "vhost"
00:02:51.741 Message: lib/ipsec: Defining dependency "ipsec"
00:02:51.741 Message: lib/pdcp: Defining dependency "pdcp"
00:02:51.741 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:51.741 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:51.741 Message: lib/fib: Defining dependency "fib"
00:02:51.741 Message: lib/port: Defining dependency "port"
00:02:51.741 Message: lib/pdump: Defining dependency "pdump"
00:02:51.741 Message: lib/table: Defining dependency "table"
00:02:51.741 Message: lib/pipeline: Defining dependency "pipeline"
00:02:51.741 Message: lib/graph: Defining dependency "graph"
00:02:51.741 Message: lib/node: Defining dependency "node"
00:02:51.741 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:52.685 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:52.685 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:52.685 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.685 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:52.685 Compiler for C supports arguments -Wno-unused-value: YES
00:02:52.685 Compiler for C supports arguments -Wno-format: YES
00:02:52.685 Compiler for C supports arguments -Wno-format-security: YES
00:02:52.685 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:52.685 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:52.685 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:52.685 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:52.685 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.685 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.685 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.685 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:52.685 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:52.685 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:52.685 Has header "sys/epoll.h" : YES
00:02:52.685 Program doxygen found: YES (/usr/bin/doxygen)
00:02:52.685 Configuring doxy-api-html.conf using configuration
00:02:52.685 Configuring doxy-api-man.conf using configuration
00:02:52.685 Program mandb found: YES (/usr/bin/mandb)
00:02:52.685 Program sphinx-build found: NO
00:02:52.685 Configuring rte_build_config.h using configuration
00:02:52.685 Message:
00:02:52.685 =================
00:02:52.685 Applications Enabled
00:02:52.685 =================
00:02:52.685
00:02:52.685 apps:
00:02:52.685 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:52.685 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:52.685 test-pmd, test-regex, test-sad, test-security-perf,
00:02:52.685
00:02:52.685 Message:
00:02:52.685 =================
00:02:52.685 Libraries Enabled
00:02:52.685 =================
00:02:52.685
00:02:52.685 libs:
00:02:52.685 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.685 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:52.685 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:52.685 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:52.685 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:52.685 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:52.685 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:52.685
00:02:52.685
00:02:52.685 Message:
00:02:52.685 ===============
00:02:52.685 Drivers Enabled
00:02:52.685 ===============
00:02:52.685
00:02:52.685 common:
00:02:52.685
00:02:52.685 bus:
00:02:52.685 pci, vdev,
00:02:52.685 mempool:
00:02:52.685 ring,
00:02:52.685 dma:
00:02:52.685
00:02:52.685 net:
00:02:52.685 i40e,
00:02:52.685 raw:
00:02:52.685
00:02:52.685 crypto:
00:02:52.685
00:02:52.685 compress:
00:02:52.685
00:02:52.685 regex:
00:02:52.685
00:02:52.685 ml:
00:02:52.685
00:02:52.685 vdpa:
00:02:52.685
00:02:52.685 event:
00:02:52.685
00:02:52.685 baseband:
00:02:52.685
00:02:52.685 gpu:
00:02:52.685
00:02:52.685
00:02:52.685 Message:
00:02:52.685 =================
00:02:52.685 Content Skipped
00:02:52.685 =================
00:02:52.685
00:02:52.685 apps:
00:02:52.685
00:02:52.685 libs:
00:02:52.685
00:02:52.685 drivers:
00:02:52.685 common/cpt: not in enabled drivers build config
00:02:52.685 common/dpaax: not in enabled drivers build config
00:02:52.685 common/iavf: not in enabled drivers build config
00:02:52.685 common/idpf: not in enabled drivers build config
00:02:52.685 common/mvep: not in enabled drivers build config
00:02:52.685 common/octeontx: not in enabled drivers build config
00:02:52.685 bus/auxiliary: not in enabled drivers build config
00:02:52.685 bus/cdx: not in enabled drivers build config
00:02:52.685 bus/dpaa: not in enabled drivers build config
00:02:52.685 bus/fslmc: not in enabled drivers build config
00:02:52.685 bus/ifpga: not in enabled drivers build config
00:02:52.685 bus/platform: not in enabled drivers build config
00:02:52.685 bus/vmbus: not in enabled drivers build config
00:02:52.685 common/cnxk: not in enabled drivers build config
00:02:52.685 common/mlx5: not in enabled drivers build config
00:02:52.685 common/nfp: not in enabled drivers build config
00:02:52.685 common/qat: not in enabled drivers build config
00:02:52.685 common/sfc_efx: not in enabled drivers build config
00:02:52.685 mempool/bucket: not in enabled drivers build config
00:02:52.685 mempool/cnxk: not in enabled drivers build config
00:02:52.685 mempool/dpaa: not in enabled drivers build config
00:02:52.685 mempool/dpaa2: not in enabled drivers build config
00:02:52.685 mempool/octeontx: not in enabled drivers build config
00:02:52.685 mempool/stack: not in enabled drivers build config
00:02:52.685 dma/cnxk: not in enabled drivers build config
00:02:52.685 dma/dpaa: not in enabled drivers build config
00:02:52.685 dma/dpaa2: not in enabled drivers build config
00:02:52.685 dma/hisilicon: not in enabled drivers build config
00:02:52.685 dma/idxd: not in enabled drivers build config
00:02:52.685 dma/ioat: not in enabled drivers build config
00:02:52.685 dma/skeleton: not in enabled drivers build config
00:02:52.685 net/af_packet: not in enabled drivers build config
00:02:52.685 net/af_xdp: not in enabled drivers build config
00:02:52.685 net/ark: not in enabled drivers build config
00:02:52.685 net/atlantic: not in enabled drivers build config
00:02:52.685 net/avp: not in enabled drivers build config
00:02:52.685 net/axgbe: not in enabled drivers build config
00:02:52.685 net/bnx2x: not in enabled drivers build config
00:02:52.685 net/bnxt: not in enabled drivers build config
00:02:52.685 net/bonding: not in enabled drivers build config
00:02:52.685 net/cnxk: not in enabled drivers build config
00:02:52.685 net/cpfl: not in enabled drivers build config
00:02:52.685 net/cxgbe: not in enabled drivers build config
00:02:52.685 net/dpaa: not in enabled drivers build config
00:02:52.685 net/dpaa2: not in enabled drivers build config
00:02:52.685 net/e1000: not in enabled drivers build config
00:02:52.685 net/ena: not in enabled drivers build config
00:02:52.685 net/enetc: not in enabled drivers build config
00:02:52.685 net/enetfec: not in enabled drivers build config
00:02:52.685 net/enic: not in enabled drivers build config
00:02:52.685 net/failsafe: not in enabled drivers build config
00:02:52.685 net/fm10k: not in enabled drivers build config
00:02:52.685 net/gve: not in enabled drivers build config
00:02:52.685 net/hinic: not in enabled drivers build config
00:02:52.685 net/hns3: not in enabled drivers build config
00:02:52.685 net/iavf: not in enabled drivers build config
00:02:52.685 net/ice: not in enabled drivers build config
00:02:52.685 net/idpf: not in enabled drivers build config
00:02:52.685 net/igc: not in enabled drivers build config
00:02:52.685 net/ionic: not in enabled drivers build config
00:02:52.685 net/ipn3ke: not in enabled drivers build config
00:02:52.685 net/ixgbe: not in enabled drivers build config
00:02:52.685 net/mana: not in enabled drivers build config
00:02:52.685 net/memif: not in enabled drivers build config
00:02:52.685 net/mlx4: not in enabled drivers build config
00:02:52.685 net/mlx5: not in enabled drivers build config
00:02:52.685 net/mvneta: not in enabled drivers build config
00:02:52.685 net/mvpp2: not in enabled drivers build config
00:02:52.685 net/netvsc: not in enabled drivers build config
00:02:52.685 net/nfb: not in enabled drivers build config
00:02:52.685 net/nfp: not in enabled drivers build config
00:02:52.685 net/ngbe: not in enabled drivers build config
00:02:52.685 net/null: not in enabled drivers build config
00:02:52.685 net/octeontx: not in enabled drivers build config
00:02:52.685 net/octeon_ep: not in enabled drivers build config
00:02:52.685 net/pcap: not in enabled drivers build config
00:02:52.685 net/pfe: not in enabled drivers build config
00:02:52.685 net/qede: not in enabled drivers build config
00:02:52.685 net/ring: not in enabled drivers build config
00:02:52.685 net/sfc: not in enabled drivers build config
00:02:52.685 net/softnic: not in enabled drivers build config
00:02:52.685 net/tap: not in enabled drivers build config
00:02:52.685 net/thunderx: not in enabled drivers build config
00:02:52.685 net/txgbe: not in enabled drivers build config
00:02:52.685 net/vdev_netvsc: not in enabled drivers build config
00:02:52.685 net/vhost: not in enabled drivers build config
00:02:52.686 net/virtio: not in enabled drivers build config
00:02:52.686 net/vmxnet3: not in enabled drivers build config
00:02:52.686 raw/cnxk_bphy: not in enabled drivers build config
00:02:52.686 raw/cnxk_gpio: not in enabled drivers build config
00:02:52.686 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:52.686 raw/ifpga: not in enabled drivers build config
00:02:52.686 raw/ntb: not in enabled drivers build config
00:02:52.686 raw/skeleton: not in enabled drivers build config
00:02:52.686 crypto/armv8: not in enabled drivers build config
00:02:52.686 crypto/bcmfs: not in enabled drivers build config
00:02:52.686 crypto/caam_jr: not in enabled drivers build config
00:02:52.686 crypto/ccp: not in enabled drivers build config
00:02:52.686 crypto/cnxk: not in enabled drivers build config
00:02:52.686 crypto/dpaa_sec: not in enabled drivers build config
00:02:52.686 crypto/dpaa2_sec: not in enabled drivers build config
00:02:52.686 crypto/ipsec_mb: not in enabled drivers build config
00:02:52.686 crypto/mlx5: not in enabled drivers build config
00:02:52.686 crypto/mvsam: not in enabled drivers build config
00:02:52.686 crypto/nitrox: not in enabled drivers build config
00:02:52.686 crypto/null: not in enabled drivers build config
00:02:52.686 crypto/octeontx: not in enabled drivers build config
00:02:52.686 crypto/openssl: not in enabled drivers build config
00:02:52.686 crypto/scheduler: not in enabled drivers build config
00:02:52.686 crypto/uadk: not in enabled drivers build config
00:02:52.686 crypto/virtio: not in enabled drivers build config
00:02:52.686 compress/isal: not in enabled drivers build config
00:02:52.686 compress/mlx5: not in enabled drivers build config
00:02:52.686 compress/octeontx: not in enabled drivers build config
00:02:52.686 compress/zlib: not in enabled drivers build config
00:02:52.686 regex/mlx5: not in enabled drivers build config
00:02:52.686 regex/cn9k: not in enabled drivers build config
00:02:52.686 ml/cnxk: not in enabled drivers build config
00:02:52.686 vdpa/ifc: not in enabled drivers build config
00:02:52.686 vdpa/mlx5: not in enabled drivers build config
00:02:52.686 vdpa/nfp: not in enabled drivers build config
00:02:52.686 vdpa/sfc: not in enabled drivers build config
00:02:52.686 event/cnxk: not in enabled drivers build config
00:02:52.686 event/dlb2: not in enabled drivers build config
00:02:52.686 event/dpaa: not in enabled drivers build config
00:02:52.686 event/dpaa2: not in enabled drivers build config
00:02:52.686 event/dsw: not in enabled drivers build config
00:02:52.686 event/opdl: not in enabled drivers build config
00:02:52.686 event/skeleton: not in enabled drivers build config
00:02:52.686 event/sw: not in enabled drivers build config
00:02:52.686 event/octeontx: not in enabled drivers build config
00:02:52.686 baseband/acc: not in enabled drivers build config
00:02:52.686 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:52.686 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:52.686 baseband/la12xx: not in enabled drivers build config
00:02:52.686 baseband/null: not in enabled drivers build config
00:02:52.686 baseband/turbo_sw: not in enabled drivers build config
00:02:52.686 gpu/cuda: not in enabled drivers build config
00:02:52.686
00:02:52.686
00:02:52.686 Build targets in project: 217
00:02:52.686
00:02:52.686 DPDK 23.11.0
00:02:52.686
00:02:52.686 User defined options
00:02:52.686 libdir : lib
00:02:52.686 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:52.686 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:52.686 c_link_args :
00:02:52.686 enable_docs : false
00:02:52.686 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:52.686 enable_kmods : false
00:02:52.686 machine : native
00:02:52.686 tests : false
00:02:52.686
00:02:52.686 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.686 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
13:32:54 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96
00:02:52.686 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:52.686 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[6/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[10/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[11/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[12/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[16/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[17/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[20/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[21/707] Linking static target lib/librte_kvargs.a
[22/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[23/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[25/707] Linking static target lib/librte_pci.a
[26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[30/707] Compiling C object lib/librte_log.a.p/log_log.c.o
[31/707] Linking static target lib/librte_log.a
[32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:53.204 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:53.204 [37/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.205 [38/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:53.205 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.205 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:53.205 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:53.205 [42/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:53.205 [43/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:53.205 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:53.205 [45/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:53.469 [46/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:53.469 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:53.469 [48/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:53.469 [49/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:53.469 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:53.469 [51/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:53.469 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:53.469 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:53.469 [54/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:53.469 [55/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:53.469 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.469 [57/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:53.469 [58/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:53.469 [59/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:53.469 [60/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.469 [61/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:53.469 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:53.469 [63/707] Linking static target lib/librte_meter.a
00:02:53.469 [64/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:53.469 [65/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:53.469 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:53.469 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:53.469 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:53.469 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.469 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:53.469 [71/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:53.469 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:53.469 [73/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.469 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.469 [75/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:53.469 [76/707] Linking static target lib/librte_ring.a 00:02:53.469 [77/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.469 [78/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:53.469 [79/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.469 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.469 [81/707] Linking static target lib/librte_cmdline.a 00:02:53.469 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.469 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.469 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.469 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.469 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.469 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:53.469 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.469 [89/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:53.469 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.469 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.469 [92/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:53.469 [93/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.469 [94/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:53.469 [95/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.469 [96/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:53.469 [97/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.729 [98/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:53.729 [99/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:53.729 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:53.729 [101/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:53.729 [102/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.729 [103/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:53.729 [104/707] Linking static target lib/librte_net.a 00:02:53.729 [105/707] Linking static target lib/librte_metrics.a 00:02:53.729 [106/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.729 [107/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:53.729 [108/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:53.729 [109/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.729 [110/707] Linking target lib/librte_log.so.24.0 00:02:53.729 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:53.729 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:53.729 [113/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:53.729 [114/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.729 [115/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:53.729 [116/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:53.729 [117/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:53.729 [118/707] Linking static target lib/librte_cfgfile.a 00:02:53.729 [119/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.729 [120/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.013 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.013 [122/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:54.013 [123/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.013 [124/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:54.013 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:54.013 [126/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.013 [127/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.014 [128/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:54.014 [129/707] Linking target lib/librte_kvargs.so.24.0 00:02:54.014 [130/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:54.014 [131/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:54.014 [132/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:54.014 [133/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:54.014 [134/707] Linking static target lib/librte_bitratestats.a 00:02:54.014 [135/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.014 [136/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.014 [137/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:54.014 [138/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.014 [139/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:54.014 [140/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.014 [141/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:54.014 [142/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.014 [143/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:54.014 [144/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:54.014 [145/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.014 [146/707] Linking static target lib/librte_mempool.a 00:02:54.014 [147/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.014 [148/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.014 [149/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:54.285 [150/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.285 [151/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.285 [152/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:54.285 [153/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:54.285 [154/707] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.285 [155/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:54.285 [156/707] Linking static target lib/librte_timer.a 00:02:54.285 [157/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:54.285 [158/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:54.285 [159/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:54.285 [160/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.285 [161/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:54.285 [162/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:54.285 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:54.285 [164/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:54.285 [165/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:54.285 [166/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.285 [167/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.285 [168/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.285 [169/707] Linking static target lib/librte_compressdev.a 00:02:54.285 [170/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.285 [171/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:54.285 [172/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:54.285 [173/707] Linking static target lib/librte_jobstats.a 00:02:54.285 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:54.285 [175/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:54.285 [176/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:54.285 [177/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:54.285 [178/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:54.285 [179/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:54.285 [180/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:54.285 [181/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:54.285 [182/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:54.285 [183/707] Linking static target lib/librte_bbdev.a 00:02:54.285 [184/707] Linking static target lib/librte_dispatcher.a 00:02:54.546 [185/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:54.546 [186/707] Linking static target lib/librte_rcu.a 00:02:54.546 [187/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.546 [188/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.546 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:54.546 [190/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:54.546 [191/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:54.546 [192/707] Linking static target lib/librte_telemetry.a 00:02:54.546 [193/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.546 [194/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.546 [195/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:54.546 [196/707] Compiling 
C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.546 [197/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:54.546 [198/707] Linking static target lib/librte_eal.a 00:02:54.546 [199/707] Linking static target lib/librte_gpudev.a 00:02:54.546 [200/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:54.546 [201/707] Linking static target lib/librte_gro.a 00:02:54.546 [202/707] Linking static target lib/librte_latencystats.a 00:02:54.546 [203/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:54.546 [204/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:54.546 [205/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:54.546 [206/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.546 [207/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.546 [208/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.546 [209/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:54.546 [210/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:54.546 [211/707] Linking static target lib/librte_dmadev.a 00:02:54.546 [212/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.546 [213/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:54.546 [214/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:54.546 [215/707] Linking static target lib/librte_gso.a 00:02:54.546 [216/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.546 [217/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:54.546 [218/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:54.546 [219/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:54.546 [220/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.807 [221/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:54.807 [222/707] Linking static target lib/librte_distributor.a 00:02:54.807 [223/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:54.807 [224/707] Linking static target lib/librte_mbuf.a 00:02:54.807 [225/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:54.807 [226/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.807 [227/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:54.807 [228/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.807 [229/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:54.807 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:54.807 [231/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:54.807 [232/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:54.807 [233/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.807 [234/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:54.807 [235/707] Linking static target lib/librte_stack.a 00:02:54.807 [236/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:54.807 [237/707] Linking static target lib/librte_ip_frag.a 
00:02:54.807 [238/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.807 [239/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:54.807 [240/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.807 [241/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.808 [242/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.808 [243/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:54.808 [244/707] Linking static target lib/librte_regexdev.a 00:02:54.808 [245/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.808 [246/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [247/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:55.067 [248/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:55.067 [249/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.067 [250/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:55.067 [251/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:55.067 [252/707] Linking static target lib/librte_mldev.a 00:02:55.067 [253/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:55.067 [254/707] Linking static target lib/librte_rawdev.a 00:02:55.067 [255/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [256/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:55.067 [257/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:55.067 [258/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.067 [259/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.067 [260/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:55.067 [261/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:55.067 [262/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [263/707] Linking static target lib/librte_power.a 00:02:55.067 [264/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [265/707] Linking static target lib/librte_pcapng.a 00:02:55.067 [266/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.067 [267/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [268/707] Linking static target lib/librte_reorder.a 00:02:55.067 [269/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:55.067 [270/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:55.067 [272/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:55.067 [273/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:55.067 [274/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:55.067 [275/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.067 [276/707] Linking static target lib/librte_bpf.a 00:02:55.067 [277/707] 
Linking target lib/librte_telemetry.so.24.0 00:02:55.331 [278/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:55.331 [279/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.331 [280/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.331 [281/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.331 [282/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:55.331 [283/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.331 [284/707] Linking static target lib/librte_security.a 00:02:55.331 [285/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.331 [286/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:55.331 [287/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:55.331 [288/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.331 [289/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:55.331 [290/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:55.331 [291/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.331 [292/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:55.331 [293/707] Linking static target lib/librte_lpm.a 00:02:55.331 [294/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:55.332 [295/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:55.332 [296/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.332 [297/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:55.332 [298/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:55.332 [299/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.596 [300/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.596 [301/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:55.596 [302/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:55.596 [303/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:55.596 [304/707] Linking static target lib/librte_rib.a 00:02:55.596 [305/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:55.596 [306/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:55.596 [307/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.596 [308/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:55.596 [309/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:55.596 [310/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.596 [311/707] Linking static target lib/librte_efd.a 00:02:55.596 [312/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:55.596 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:55.596 [314/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:55.596 [315/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.596 [316/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.596 [317/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:55.596 [318/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:55.596 [319/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.858 [320/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:55.858 [321/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:55.858 [322/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:55.858 [323/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:55.858 [324/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.858 [325/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:55.858 [326/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:55.858 [327/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:55.858 [328/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:55.858 [329/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:55.858 [330/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.858 [331/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:55.858 [332/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:55.858 [333/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:55.858 [334/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.858 [335/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:55.858 [336/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:55.858 [337/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:55.858 [338/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:55.858 [339/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.858 [340/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:55.858 [341/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:56.118 [342/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.118 [343/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.118 [344/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.118 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:56.118 [346/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:56.118 [347/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:56.118 [348/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:56.118 [349/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:56.118 [350/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.118 [351/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:56.118 [352/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:56.118 [353/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:56.118 [354/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:56.118 [355/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:56.118 [356/707] Compiling C object 
lib/librte_graph.a.p/graph_graph.c.o 00:02:56.118 [357/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:56.118 [358/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.118 [359/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:56.376 [360/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:56.376 [361/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:56.376 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:56.376 [363/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.376 [364/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.376 [365/707] Linking static target lib/librte_fib.a 00:02:56.376 [366/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:56.376 [367/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.376 [368/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:56.376 [369/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:56.376 [370/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:56.376 [371/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:56.376 [372/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.377 [373/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.377 [374/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:56.377 [375/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:56.377 [376/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:56.377 [377/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:56.377 [378/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.377 [379/707] Linking static target lib/librte_pdump.a 00:02:56.637 [380/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.637 [381/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.637 [382/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:56.637 [383/707] Linking static target lib/librte_graph.a 00:02:56.637 [384/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.637 [385/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:56.637 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:56.637 [387/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:56.637 [388/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:56.637 [389/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:56.637 [390/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:56.637 [391/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:56.637 [392/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:56.637 [393/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:56.637 [394/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:56.637 [395/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:56.637 [396/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:56.637 [397/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.899 
[398/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:56.899 [399/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.899 [400/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:56.899 [401/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:56.899 [402/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.899 [403/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:56.899 [404/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:56.899 [405/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.899 [406/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:56.899 [407/707] Linking static target lib/librte_sched.a 00:02:56.899 [408/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.899 [409/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.899 [410/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:56.899 [411/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.899 [412/707] Linking static target lib/librte_cryptodev.a 00:02:56.899 [413/707] Linking static target drivers/librte_bus_vdev.a 00:02:56.899 [414/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:56.899 [415/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.899 [416/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.899 [417/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:56.899 [418/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:56.899 [419/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:56.899 [420/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:56.899 [421/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:56.899 [422/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:56.899 [423/707] Linking static target lib/librte_table.a 00:02:56.899 [424/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:56.899 [425/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:56.899 [426/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:56.899 [427/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:57.158 [428/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:57.158 [429/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:57.158 [430/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:57.158 [431/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.158 [432/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:57.158 [433/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:57.158 [434/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.158 [435/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:57.158 [436/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:57.158 [437/707] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.158 [438/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:57.158 [439/707] Linking static target drivers/librte_bus_pci.a 00:02:57.158 [440/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:57.158 [441/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.158 [442/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.158 [443/707] Linking static target lib/librte_member.a 00:02:57.158 [444/707] Linking static target lib/librte_ipsec.a 00:02:57.158 [445/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:57.158 [446/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.158 [447/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.158 [448/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:57.418 [449/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:57.418 [450/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:57.418 [451/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.418 [452/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.418 [453/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:57.418 [454/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:57.418 [455/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.418 [456/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.418 [457/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.418 [458/707] Linking static target lib/librte_hash.a 00:02:57.418 [459/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:57.418 [460/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:57.418 [461/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:57.418 [462/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:57.418 [463/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:57.418 [464/707] Linking static target lib/librte_node.a 00:02:57.418 [465/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:57.418 [466/707] Linking static target lib/acl/libavx2_tmp.a 00:02:57.418 [467/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:57.418 [468/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:57.419 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:57.681 [470/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:57.681 [471/707] Linking static target lib/librte_pdcp.a 00:02:57.681 [472/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:57.681 [473/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:57.681 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:57.681 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:57.681 [476/707] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:57.681 [477/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:57.681 [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:57.681 [479/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.681 [480/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:57.681 [481/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:57.681 [482/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:57.681 [483/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:57.681 [484/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.681 [485/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:57.681 [486/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.681 [487/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:57.681 [488/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.681 [489/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.681 [490/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:57.681 [491/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:57.681 [492/707] Linking static target drivers/librte_mempool_ring.a 00:02:57.681 [493/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:57.681 [494/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:57.681 [495/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:57.681 [496/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:57.681 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:57.681 [498/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:57.941 [499/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:57.941 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:57.941 [501/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:57.941 [502/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:57.941 [503/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:57.941 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:57.941 [505/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:57.941 [506/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:57.941 [507/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:57.941 [508/707] Linking static target lib/librte_eventdev.a 00:02:57.941 [509/707] Linking static target lib/librte_port.a 00:02:57.941 [510/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.941 [511/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:57.941 [512/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:57.941 [513/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:57.941 [514/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.941 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.941 [516/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:57.941 [517/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:57.941 [518/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:57.941 [519/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.941 [520/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:57.941 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:57.941 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:57.941 [523/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:57.941 [524/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:57.941 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:57.941 [526/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:58.200 [527/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.200 [528/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:58.200 [529/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:58.200 [530/707] Linking static target lib/librte_acl.a 00:02:58.200 [531/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:58.200 [532/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:58.200 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:58.200 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:58.200 [535/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:58.200 [536/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.200 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:58.200 [538/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:58.200 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:58.200 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:58.200 [541/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:58.200 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:58.200 [543/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:58.200 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:58.459 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:58.459 [546/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.459 [547/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:58.459 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:58.459 [549/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.459 [550/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:58.459 [551/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.459 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:58.459 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:58.459 [554/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.459 [555/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:58.459 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:58.459 [557/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:58.459 [558/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:58.718 [559/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:58.718 [560/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.718 [561/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:58.718 [562/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:58.718 [563/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:58.718 [564/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.718 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:58.718 [566/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:58.718 [567/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:58.718 [568/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.977 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:58.977 [570/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.977 [571/707] Linking static target lib/librte_ethdev.a 00:02:58.977 [572/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:58.977 [573/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:58.977 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:59.236 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.494 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.494 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:00.060 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:00.060 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:00.319 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:00.577 [581/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.577 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:00.577 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:00.837 [584/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:00.837 [585/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.095 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.095 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.095 [588/707] Linking static target 
drivers/librte_net_i40e.a 00:03:01.095 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.027 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.027 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:02.962 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:03.897 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.897 [594/707] Linking target lib/librte_eal.so.24.0 00:03:03.897 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:03.897 [596/707] Linking target lib/librte_meter.so.24.0 00:03:03.897 [597/707] Linking target lib/librte_cfgfile.so.24.0 00:03:03.897 [598/707] Linking target lib/librte_jobstats.so.24.0 00:03:03.897 [599/707] Linking target lib/librte_ring.so.24.0 00:03:03.897 [600/707] Linking target lib/librte_timer.so.24.0 00:03:03.897 [601/707] Linking target lib/librte_pci.so.24.0 00:03:03.897 [602/707] Linking target lib/librte_dmadev.so.24.0 00:03:03.897 [603/707] Linking target lib/librte_rawdev.so.24.0 00:03:03.897 [604/707] Linking target lib/librte_stack.so.24.0 00:03:03.897 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:03:03.897 [606/707] Linking target lib/librte_acl.so.24.0 00:03:04.156 [607/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:04.156 [608/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:04.156 [609/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:04.156 [610/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:04.156 [611/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:04.156 [612/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:04.156 [613/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:04.156 [614/707] Linking target lib/librte_mempool.so.24.0 00:03:04.156 [615/707] Linking target drivers/librte_bus_pci.so.24.0 00:03:04.156 [616/707] Linking target lib/librte_rcu.so.24.0 00:03:04.156 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:04.156 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:04.156 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:04.415 [620/707] Linking target lib/librte_mbuf.so.24.0 00:03:04.415 [621/707] Linking target lib/librte_rib.so.24.0 00:03:04.415 [622/707] Linking target drivers/librte_mempool_ring.so.24.0 00:03:04.415 [623/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:04.415 [624/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:04.415 [625/707] Linking target lib/librte_bbdev.so.24.0 00:03:04.415 [626/707] Linking target lib/librte_fib.so.24.0 00:03:04.415 [627/707] Linking target lib/librte_net.so.24.0 00:03:04.415 [628/707] Linking target lib/librte_regexdev.so.24.0 00:03:04.415 [629/707] Linking target lib/librte_gpudev.so.24.0 00:03:04.415 [630/707] Linking target lib/librte_reorder.so.24.0 00:03:04.415 [631/707] Linking target lib/librte_distributor.so.24.0 00:03:04.415 [632/707] Linking target lib/librte_compressdev.so.24.0 00:03:04.415 
[633/707] Linking target lib/librte_sched.so.24.0 00:03:04.415 [634/707] Linking target lib/librte_cryptodev.so.24.0 00:03:04.415 [635/707] Linking target lib/librte_mldev.so.24.0 00:03:04.673 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:04.673 [637/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:04.673 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:04.673 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:04.673 [640/707] Linking target lib/librte_hash.so.24.0 00:03:04.673 [641/707] Linking target lib/librte_cmdline.so.24.0 00:03:04.673 [642/707] Linking target lib/librte_security.so.24.0 00:03:04.673 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:04.673 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:04.931 [645/707] Linking target lib/librte_member.so.24.0 00:03:04.931 [646/707] Linking target lib/librte_efd.so.24.0 00:03:04.931 [647/707] Linking target lib/librte_lpm.so.24.0 00:03:04.931 [648/707] Linking target lib/librte_pdcp.so.24.0 00:03:04.931 [649/707] Linking target lib/librte_ipsec.so.24.0 00:03:04.931 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:04.931 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:05.866 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.866 [653/707] Linking target lib/librte_ethdev.so.24.0 00:03:05.867 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:05.867 [655/707] Linking target lib/librte_eventdev.so.24.0 00:03:05.867 [656/707] Linking target lib/librte_metrics.so.24.0 00:03:05.867 [657/707] Linking target lib/librte_pcapng.so.24.0 00:03:05.867 [658/707] Linking target lib/librte_gso.so.24.0 00:03:05.867 [659/707] Linking target lib/librte_bpf.so.24.0 00:03:05.867 [660/707] Linking target lib/librte_gro.so.24.0 00:03:05.867 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:03:05.867 [662/707] Linking target lib/librte_power.so.24.0 00:03:05.867 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:03:06.125 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:06.125 [665/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:06.125 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:06.125 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:06.125 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:06.125 [669/707] Linking target lib/librte_bitratestats.so.24.0 00:03:06.125 [670/707] Linking target lib/librte_latencystats.so.24.0 00:03:06.125 [671/707] Linking target lib/librte_pdump.so.24.0 00:03:06.125 [672/707] Linking target lib/librte_graph.so.24.0 00:03:06.125 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:03:06.125 [674/707] Linking target lib/librte_port.so.24.0 00:03:06.125 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:06.125 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:06.383 [677/707] Linking target lib/librte_node.so.24.0 
00:03:06.383 [678/707] Linking target lib/librte_table.so.24.0 00:03:06.383 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:08.913 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:08.913 [681/707] Linking static target lib/librte_pipeline.a 00:03:09.480 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.480 [683/707] Linking static target lib/librte_vhost.a 00:03:10.047 [684/707] Linking target app/dpdk-test-dma-perf 00:03:10.047 [685/707] Linking target app/dpdk-dumpcap 00:03:10.047 [686/707] Linking target app/dpdk-test-cmdline 00:03:10.047 [687/707] Linking target app/dpdk-test-acl 00:03:10.047 [688/707] Linking target app/dpdk-graph 00:03:10.047 [689/707] Linking target app/dpdk-test-crypto-perf 00:03:10.047 [690/707] Linking target app/dpdk-test-fib 00:03:10.047 [691/707] Linking target app/dpdk-test-mldev 00:03:10.047 [692/707] Linking target app/dpdk-pdump 00:03:10.047 [693/707] Linking target app/dpdk-test-flow-perf 00:03:10.047 [694/707] Linking target app/dpdk-test-regex 00:03:10.047 [695/707] Linking target app/dpdk-test-eventdev 00:03:10.047 [696/707] Linking target app/dpdk-test-gpudev 00:03:10.047 [697/707] Linking target app/dpdk-proc-info 00:03:10.047 [698/707] Linking target app/dpdk-test-compress-perf 00:03:10.047 [699/707] Linking target app/dpdk-test-sad 00:03:10.047 [700/707] Linking target app/dpdk-test-pipeline 00:03:10.047 [701/707] Linking target app/dpdk-test-security-perf 00:03:10.047 [702/707] Linking target app/dpdk-test-bbdev 00:03:10.047 [703/707] Linking target app/dpdk-testpmd 00:03:11.474 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.474 [705/707] Linking target lib/librte_vhost.so.24.0 00:03:13.382 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.382 [707/707] Linking target lib/librte_pipeline.so.24.0 00:03:13.382 13:33:15 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:03:13.382 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:13.382 [0/1] Installing files. 
00:03:13.644 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.644 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:13.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
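(Like the other example apps installed here, l3fwd and l3fwd-power split their command line at --, with EAL options before it and application options after. An illustrative invocation for an l3fwd binary built from the installed example tree; every numeric value below is a placeholder, not taken from this run:

    # -l: lcore list, -n: memory channels; after --, -p is a port mask
    # and --config maps (port,queue,lcore) triples
    sudo ./build/l3fwd -l 1-2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)"
)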
00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.646 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:13.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:13.649 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:13.649 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:13.649 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:13.649 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.649 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.650 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:13.912 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:13.912 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:13.912 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:13.912 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:13.912 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.912 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:13.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.177 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:14.178 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:14.178 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24
00:03:14.178 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so
00:03:14.178 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24
00:03:14.178 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:03:14.178 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24
00:03:14.178 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:03:14.178 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24
00:03:14.178 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:03:14.178 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24
00:03:14.178 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:03:14.178 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24
00:03:14.178 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:03:14.178 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24
00:03:14.178 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:03:14.178 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24
00:03:14.178 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:03:14.178 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24
00:03:14.178 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:03:14.178 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24
00:03:14.178 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:03:14.178 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24
00:03:14.178 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:03:14.178 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24
00:03:14.178 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:03:14.178 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24
00:03:14.178 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:03:14.178 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24
00:03:14.178 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:03:14.178 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24
00:03:14.178 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:03:14.178 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24
00:03:14.178 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:03:14.178 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24
00:03:14.178 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:03:14.178 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24
00:03:14.178 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:03:14.178 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24
00:03:14.178 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:03:14.178 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24
00:03:14.178 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:03:14.178 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24
00:03:14.179 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:03:14.179 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24
00:03:14.179 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:03:14.179 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24
00:03:14.179 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:03:14.179 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24
00:03:14.179 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:03:14.179 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24
00:03:14.179 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:03:14.179 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24
00:03:14.179 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:03:14.179 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24
00:03:14.179 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:03:14.179 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24
00:03:14.179 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so
00:03:14.179 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24
00:03:14.179 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:03:14.179 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24
00:03:14.179 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:03:14.179 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24
00:03:14.179 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:03:14.179 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24
00:03:14.179 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:03:14.179 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24
00:03:14.179 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:03:14.179 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24
00:03:14.179 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:03:14.179 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24
00:03:14.179 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:03:14.179 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24
00:03:14.179 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:03:14.179 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24
00:03:14.179 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:03:14.179 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24
00:03:14.179 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:03:14.179 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so'
00:03:14.179 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24'
00:03:14.179 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0'
00:03:14.179 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so'
00:03:14.179 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24'
00:03:14.179 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0'
00:03:14.179 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so'
00:03:14.179 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24'
00:03:14.179 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0'
00:03:14.179 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so'
00:03:14.179 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24'
00:03:14.179 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0'
00:03:14.179 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24
00:03:14.179 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:03:14.179 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24
00:03:14.179 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:03:14.179 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24
00:03:14.179 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so
00:03:14.179 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24
00:03:14.179 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so
00:03:14.179 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24
00:03:14.179 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so
00:03:14.179 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24
00:03:14.179 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so
00:03:14.179 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24
00:03:14.179 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so
00:03:14.179 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24
00:03:14.179 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so
00:03:14.179 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24
00:03:14.179 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so
00:03:14.179 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24
00:03:14.179 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:03:14.179 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24
00:03:14.179 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so
00:03:14.179 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24
00:03:14.179 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so
00:03:14.179 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24
00:03:14.179 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so
00:03:14.179 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24
00:03:14.179 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so
00:03:14.179 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24
00:03:14.179 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so
00:03:14.179 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24
00:03:14.179 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:03:14.179 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24
00:03:14.179 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so
00:03:14.179 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24
00:03:14.179 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so
00:03:14.179 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24
00:03:14.179 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so
00:03:14.179 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24
00:03:14.179 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so
00:03:14.179 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24
00:03:14.179 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so
00:03:14.179 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24
00:03:14.179 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:14.179 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:14.179 13:33:16 -- common/autobuild_common.sh@189 -- $ uname -s
00:03:14.179 13:33:16 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:14.179 13:33:16 -- common/autobuild_common.sh@200 -- $ cat
00:03:14.179 13:33:16 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:14.179
00:03:14.179 real 0m26.733s
00:03:14.179 user 8m25.863s
00:03:14.179 sys 1m56.331s
00:03:14.179 13:33:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:14.179 13:33:16 -- common/autotest_common.sh@10 -- $ set +x
00:03:14.179 ************************************
00:03:14.179 END TEST build_native_dpdk
00:03:14.179 ************************************
00:03:14.179 13:33:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:14.179 13:33:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:14.179 13:33:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:14.180 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:14.438 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:14.438 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:14.438 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:14.696 Using 'verbs' RDMA provider
00:03:27.465 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:03:37.437 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:37.437 Creating mk/config.mk...done.
00:03:37.437 Creating mk/cc.flags.mk...done.
00:03:37.437 Type 'make' to build.
00:03:37.437 13:33:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:03:37.437 13:33:39 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:03:37.437 13:33:39 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:03:37.437 13:33:39 -- common/autotest_common.sh@10 -- $ set +x
00:03:37.437 ************************************
00:03:37.437 START TEST make
00:03:37.437 ************************************
00:03:37.437 13:33:39 -- common/autotest_common.sh@1104 -- $ make -j96
00:03:38.003 make[1]: Nothing to be done for 'all'.
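For reference, the configure step above locates the freshly built DPDK through pkg-config, via the libdpdk.pc and libdpdk-libs.pc files that were installed into dpdk/build/lib/pkgconfig earlier in this log. The following shell sketch is not part of the log output; it simply shows how that same discovery could be reproduced by hand, assuming the PKG_CONFIG_PATH value shown above (the exact flags printed will depend on the build):

    # Point pkg-config at the private DPDK install created by this build.
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # prints the installed DPDK version string
    pkg-config --cflags libdpdk       # include flags, i.e. the DPDK includes path above
    pkg-config --libs libdpdk         # linker flags for the librte_* shared libraries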
00:03:38.943 The Meson build system
00:03:38.943 Version: 1.3.1
00:03:38.943 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:38.943 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:38.943 Build type: native build
00:03:38.943 Project name: libvfio-user
00:03:38.943 Project version: 0.0.1
00:03:38.943 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:38.943 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:38.943 Host machine cpu family: x86_64
00:03:38.943 Host machine cpu: x86_64
00:03:38.943 Run-time dependency threads found: YES
00:03:38.943 Library dl found: YES
00:03:38.943 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:38.943 Run-time dependency json-c found: YES 0.17
00:03:38.943 Run-time dependency cmocka found: YES 1.1.7
00:03:38.943 Program pytest-3 found: NO
00:03:38.943 Program flake8 found: NO
00:03:38.943 Program misspell-fixer found: NO
00:03:38.943 Program restructuredtext-lint found: NO
00:03:38.943 Program valgrind found: YES (/usr/bin/valgrind)
00:03:38.943 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:38.943 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:38.943 Compiler for C supports arguments -Wwrite-strings: YES
00:03:38.943 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:38.943 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:38.943 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:38.943 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:38.943 Build targets in project: 8
00:03:38.943 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:38.943 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:38.943
00:03:38.943 libvfio-user 0.0.1
00:03:38.943
00:03:38.943 User defined options
00:03:38.943 buildtype : debug
00:03:38.943 default_library: shared
00:03:38.943 libdir : /usr/local/lib
00:03:38.943
00:03:38.943 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:39.518 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:39.518 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:39.518 [2/37] Compiling C object samples/null.p/null.c.o
00:03:39.518 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:39.518 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:39.518 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:39.518 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:39.778 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:39.778 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:39.778 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:39.778 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:39.778 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:39.778 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:39.778 [13/37] Compiling C object samples/server.p/server.c.o
00:03:39.778 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:39.778 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:39.778 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:39.778 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:39.778 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:39.778 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:39.778 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:39.778 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:39.778 [22/37] Compiling C object samples/client.p/client.c.o
00:03:39.778 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:39.778 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:39.778 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:39.778 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:39.778 [27/37] Linking target samples/client
00:03:39.778 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:39.778 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:39.778 [30/37] Linking target test/unit_tests
00:03:39.778 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:03:40.034 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:40.034 [33/37] Linking target samples/shadow_ioeventfd_server
00:03:40.034 [34/37] Linking target samples/null
00:03:40.034 [35/37] Linking target samples/lspci
00:03:40.034 [36/37] Linking target samples/gpio-pci-idio-16
00:03:40.034 [37/37] Linking target samples/server
00:03:40.034 INFO: autodetecting backend as ninja
00:03:40.034 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
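The Meson summary above corresponds to a standard out-of-tree configure, build, and staged install of libvfio-user. As a sketch only (the directories are taken from the log; the option values mirror the "User defined options" block, and presenting them as one manual sequence is an assumption about how the harness drives Meson), the equivalent hand-run invocation would look like:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    # Configure: debug build type, shared default_library, custom libdir, per the summary above.
    meson setup ../build/libvfio-user/build-debug -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    # Compile the 37 targets with ninja.
    ninja -C ../build/libvfio-user/build-debug
    # Stage the install under a DESTDIR, exactly as the next log entry does.
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C ../build/libvfio-user/build-debug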
00:03:40.034 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:40.291 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:40.291 ninja: no work to do.
00:03:48.459 CC lib/log/log_flags.o
00:03:48.459 CC lib/log/log.o
00:03:48.459 CC lib/ut_mock/mock.o
00:03:48.459 CC lib/log/log_deprecated.o
00:03:48.459 CC lib/ut/ut.o
00:03:48.459 LIB libspdk_ut_mock.a
00:03:48.459 LIB libspdk_log.a
00:03:48.459 LIB libspdk_ut.a
00:03:48.459 SO libspdk_ut_mock.so.5.0
00:03:48.459 SO libspdk_log.so.6.1
00:03:48.459 SO libspdk_ut.so.1.0
00:03:48.459 SYMLINK libspdk_ut_mock.so
00:03:48.459 SYMLINK libspdk_log.so
00:03:48.459 SYMLINK libspdk_ut.so
00:03:48.459 CC lib/ioat/ioat.o
00:03:48.459 CC lib/util/base64.o
00:03:48.459 CXX lib/trace_parser/trace.o
00:03:48.459 CC lib/util/bit_array.o
00:03:48.459 CC lib/util/cpuset.o
00:03:48.459 CC lib/util/crc32c.o
00:03:48.459 CC lib/util/crc16.o
00:03:48.459 CC lib/util/crc32.o
00:03:48.459 CC lib/util/crc32_ieee.o
00:03:48.459 CC lib/util/crc64.o
00:03:48.459 CC lib/dma/dma.o
00:03:48.459 CC lib/util/dif.o
00:03:48.459 CC lib/util/fd.o
00:03:48.459 CC lib/util/file.o
00:03:48.459 CC lib/util/hexlify.o
00:03:48.459 CC lib/util/iov.o
00:03:48.459 CC lib/util/math.o
00:03:48.459 CC lib/util/pipe.o
00:03:48.459 CC lib/util/strerror_tls.o
00:03:48.459 CC lib/util/string.o
00:03:48.459 CC lib/util/uuid.o
00:03:48.459 CC lib/util/fd_group.o
00:03:48.459 CC lib/util/xor.o
00:03:48.459 CC lib/util/zipf.o
00:03:48.459 CC lib/vfio_user/host/vfio_user_pci.o
00:03:48.459 CC lib/vfio_user/host/vfio_user.o
00:03:48.459 LIB libspdk_dma.a
00:03:48.459 SO libspdk_dma.so.3.0
00:03:48.459 LIB libspdk_ioat.a
00:03:48.717 SO libspdk_ioat.so.6.0
00:03:48.717 SYMLINK libspdk_dma.so
00:03:48.717 SYMLINK libspdk_ioat.so
00:03:48.717 LIB libspdk_vfio_user.a
00:03:48.717 SO libspdk_vfio_user.so.4.0
00:03:48.717 SYMLINK libspdk_vfio_user.so
00:03:48.717 LIB libspdk_util.a
00:03:48.717 SO libspdk_util.so.8.0
00:03:48.975 SYMLINK libspdk_util.so
00:03:48.975 LIB libspdk_trace_parser.a
00:03:48.975 SO libspdk_trace_parser.so.4.0
00:03:49.232 CC lib/vmd/vmd.o
00:03:49.232 CC lib/vmd/led.o
00:03:49.232 CC lib/json/json_parse.o
00:03:49.232 CC lib/json/json_util.o
00:03:49.232 CC lib/idxd/idxd.o
00:03:49.232 CC lib/idxd/idxd_user.o
00:03:49.232 CC lib/json/json_write.o
00:03:49.232 CC lib/idxd/idxd_kernel.o
00:03:49.232 SYMLINK libspdk_trace_parser.so
00:03:49.232 CC lib/conf/conf.o
00:03:49.232 CC lib/rdma/common.o
00:03:49.232 CC lib/rdma/rdma_verbs.o
00:03:49.232 CC lib/env_dpdk/env.o
00:03:49.232 CC lib/env_dpdk/memory.o
00:03:49.232 CC lib/env_dpdk/pci.o
00:03:49.232 CC lib/env_dpdk/init.o
00:03:49.232 CC lib/env_dpdk/threads.o
00:03:49.232 CC lib/env_dpdk/pci_ioat.o
00:03:49.232 CC lib/env_dpdk/pci_virtio.o
00:03:49.232 CC lib/env_dpdk/pci_vmd.o
00:03:49.232 CC lib/env_dpdk/pci_idxd.o
00:03:49.232 CC lib/env_dpdk/pci_event.o
00:03:49.232 CC lib/env_dpdk/sigbus_handler.o
00:03:49.232 CC lib/env_dpdk/pci_dpdk.o
00:03:49.232 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:49.232 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:49.232 LIB libspdk_conf.a
00:03:49.490 SO libspdk_conf.so.5.0
00:03:49.490 LIB libspdk_json.a
00:03:49.490 LIB libspdk_rdma.a
00:03:49.490 SO libspdk_rdma.so.5.0
00:03:49.490 SO libspdk_json.so.5.1
00:03:49.490 SYMLINK libspdk_conf.so
00:03:49.490 SYMLINK libspdk_json.so
00:03:49.490 SYMLINK libspdk_rdma.so
00:03:49.490 LIB libspdk_idxd.a
00:03:49.490 SO libspdk_idxd.so.11.0
00:03:49.746 LIB libspdk_vmd.a
00:03:49.746 SO libspdk_vmd.so.5.0
00:03:49.746 SYMLINK libspdk_idxd.so
00:03:49.746 CC lib/jsonrpc/jsonrpc_server.o
00:03:49.746 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:49.746 CC lib/jsonrpc/jsonrpc_client.o
00:03:49.746 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:49.746 SYMLINK libspdk_vmd.so
00:03:50.003 LIB libspdk_jsonrpc.a
00:03:50.004 SO libspdk_jsonrpc.so.5.1
00:03:50.004 SYMLINK libspdk_jsonrpc.so
00:03:50.004 LIB libspdk_env_dpdk.a
00:03:50.261 SO libspdk_env_dpdk.so.13.0
00:03:50.261 CC lib/rpc/rpc.o
00:03:50.261 SYMLINK libspdk_env_dpdk.so
00:03:50.261 LIB libspdk_rpc.a
00:03:50.518 SO libspdk_rpc.so.5.0
00:03:50.518 SYMLINK libspdk_rpc.so
00:03:50.518 CC lib/notify/notify.o
00:03:50.518 CC lib/notify/notify_rpc.o
00:03:50.775 CC lib/trace/trace_flags.o
00:03:50.775 CC lib/trace/trace.o
00:03:50.775 CC lib/trace/trace_rpc.o
00:03:50.775 CC lib/sock/sock.o
00:03:50.775 CC lib/sock/sock_rpc.o
00:03:50.775 LIB libspdk_notify.a
00:03:50.775 SO libspdk_notify.so.5.0
00:03:50.775 LIB libspdk_trace.a
00:03:50.775 SYMLINK libspdk_notify.so
00:03:50.775 SO libspdk_trace.so.9.0
00:03:51.032 SYMLINK libspdk_trace.so
00:03:51.032 LIB libspdk_sock.a
00:03:51.032 SO libspdk_sock.so.8.0
00:03:51.032 SYMLINK libspdk_sock.so
00:03:51.032 CC lib/thread/thread.o
00:03:51.032 CC lib/thread/iobuf.o
00:03:51.291 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:51.291 CC lib/nvme/nvme_ctrlr.o
00:03:51.291 CC lib/nvme/nvme_fabric.o
00:03:51.291 CC lib/nvme/nvme_ns_cmd.o
00:03:51.291 CC lib/nvme/nvme_ns.o
00:03:51.291 CC lib/nvme/nvme_pcie_common.o
00:03:51.291 CC lib/nvme/nvme.o
00:03:51.291 CC lib/nvme/nvme_pcie.o
00:03:51.291 CC lib/nvme/nvme_qpair.o
00:03:51.291 CC lib/nvme/nvme_quirks.o
00:03:51.291 CC lib/nvme/nvme_transport.o
00:03:51.291 CC lib/nvme/nvme_discovery.o
00:03:51.291 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:51.291 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:51.291 CC lib/nvme/nvme_tcp.o
00:03:51.291 CC lib/nvme/nvme_opal.o
00:03:51.291 CC lib/nvme/nvme_zns.o
00:03:51.291 CC lib/nvme/nvme_io_msg.o
00:03:51.291 CC lib/nvme/nvme_poll_group.o
00:03:51.291 CC lib/nvme/nvme_cuse.o
00:03:51.291 CC lib/nvme/nvme_vfio_user.o
00:03:51.291 CC lib/nvme/nvme_rdma.o
00:03:52.225 LIB libspdk_thread.a
00:03:52.225 SO libspdk_thread.so.9.0
00:03:52.225 SYMLINK libspdk_thread.so
00:03:52.482 CC lib/init/json_config.o
00:03:52.482 CC lib/init/subsystem.o
00:03:52.482 CC lib/init/subsystem_rpc.o
00:03:52.482 CC lib/init/rpc.o
00:03:52.482 CC lib/vfu_tgt/tgt_endpoint.o
00:03:52.482 CC lib/vfu_tgt/tgt_rpc.o
00:03:52.482 CC lib/accel/accel.o
00:03:52.482 CC lib/accel/accel_rpc.o
00:03:52.482 CC lib/accel/accel_sw.o
00:03:52.482 CC lib/virtio/virtio.o
00:03:52.482 CC lib/virtio/virtio_vhost_user.o
00:03:52.482 CC lib/virtio/virtio_vfio_user.o
00:03:52.482 CC lib/virtio/virtio_pci.o
00:03:52.482 CC lib/blob/blobstore.o
00:03:52.482 CC lib/blob/request.o
00:03:52.482 CC lib/blob/zeroes.o
00:03:52.482 CC lib/blob/blob_bs_dev.o
00:03:52.739 LIB libspdk_init.a
00:03:52.739 SO libspdk_init.so.4.0
00:03:52.739 LIB libspdk_vfu_tgt.a
00:03:52.739 LIB libspdk_virtio.a
00:03:52.739 SO libspdk_vfu_tgt.so.2.0
00:03:52.739 SYMLINK libspdk_init.so
00:03:52.739 SO libspdk_virtio.so.6.0
00:03:52.739 LIB libspdk_nvme.a
00:03:52.739 SYMLINK libspdk_vfu_tgt.so
00:03:52.739 SYMLINK libspdk_virtio.so
00:03:52.998 SO libspdk_nvme.so.12.0
00:03:52.998 CC lib/event/app.o
00:03:52.998 CC lib/event/reactor.o
00:03:52.998 CC lib/event/app_rpc.o
00:03:52.998 CC lib/event/log_rpc.o
00:03:52.998 CC lib/event/scheduler_static.o
00:03:52.998 SYMLINK libspdk_nvme.so
00:03:53.256 LIB libspdk_accel.a
00:03:53.256 SO libspdk_accel.so.14.0
00:03:53.256 LIB libspdk_event.a
00:03:53.256 SYMLINK libspdk_accel.so
00:03:53.256 SO libspdk_event.so.12.0
00:03:53.256 SYMLINK libspdk_event.so
00:03:53.515 CC lib/bdev/bdev.o
00:03:53.515 CC lib/bdev/bdev_rpc.o
00:03:53.515 CC lib/bdev/bdev_zone.o
00:03:53.515 CC lib/bdev/part.o
00:03:53.515 CC lib/bdev/scsi_nvme.o
00:03:54.450 LIB libspdk_blob.a
00:03:54.450 SO libspdk_blob.so.10.1
00:03:54.450 SYMLINK libspdk_blob.so
00:03:54.708 CC lib/lvol/lvol.o
00:03:54.708 CC lib/blobfs/blobfs.o
00:03:54.708 CC lib/blobfs/tree.o
00:03:55.275 LIB libspdk_bdev.a
00:03:55.275 SO libspdk_bdev.so.14.0
00:03:55.275 LIB libspdk_blobfs.a
00:03:55.275 SO libspdk_blobfs.so.9.0
00:03:55.275 LIB libspdk_lvol.a
00:03:55.275 SYMLINK libspdk_bdev.so
00:03:55.275 SO libspdk_lvol.so.9.1
00:03:55.275 SYMLINK libspdk_blobfs.so
00:03:55.275 SYMLINK libspdk_lvol.so
00:03:55.535 CC lib/scsi/dev.o
00:03:55.535 CC lib/scsi/scsi.o
00:03:55.535 CC lib/scsi/lun.o
00:03:55.535 CC lib/scsi/port.o
00:03:55.535 CC lib/scsi/scsi_bdev.o
00:03:55.535 CC lib/scsi/scsi_pr.o
00:03:55.535 CC lib/scsi/scsi_rpc.o
00:03:55.535 CC lib/nvmf/ctrlr.o
00:03:55.535 CC lib/scsi/task.o
00:03:55.535 CC lib/nvmf/ctrlr_discovery.o
00:03:55.535 CC lib/nvmf/ctrlr_bdev.o
00:03:55.535 CC lib/nvmf/subsystem.o
00:03:55.535 CC lib/nvmf/nvmf.o
00:03:55.535 CC lib/nvmf/nvmf_rpc.o
00:03:55.535 CC lib/nvmf/transport.o
00:03:55.535 CC lib/nvmf/tcp.o
00:03:55.535 CC lib/nvmf/vfio_user.o
00:03:55.535 CC lib/nvmf/rdma.o
00:03:55.535 CC lib/ftl/ftl_core.o
00:03:55.535 CC lib/ftl/ftl_init.o
00:03:55.535 CC lib/ftl/ftl_layout.o
00:03:55.535 CC lib/ftl/ftl_io.o
00:03:55.535 CC lib/ftl/ftl_debug.o
00:03:55.535 CC lib/ftl/ftl_sb.o
00:03:55.535 CC lib/ftl/ftl_l2p.o
00:03:55.535 CC lib/ftl/ftl_l2p_flat.o
00:03:55.535 CC lib/ftl/ftl_nv_cache.o
00:03:55.535 CC lib/ftl/ftl_band.o
00:03:55.535 CC lib/ftl/ftl_writer.o
00:03:55.535 CC lib/ftl/ftl_band_ops.o
00:03:55.535 CC lib/ftl/ftl_rq.o
00:03:55.535 CC lib/ftl/ftl_reloc.o
00:03:55.535 CC lib/ftl/ftl_p2l.o
00:03:55.535 CC lib/ftl/ftl_l2p_cache.o
00:03:55.535 CC lib/nbd/nbd.o
00:03:55.535 CC lib/nbd/nbd_rpc.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt.o
00:03:55.535 CC lib/ublk/ublk.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:55.535 CC lib/ublk/ublk_rpc.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:55.535 CC lib/ftl/utils/ftl_conf.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:55.535 CC lib/ftl/utils/ftl_md.o
00:03:55.535 CC lib/ftl/utils/ftl_bitmap.o
00:03:55.535 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:55.535 CC lib/ftl/utils/ftl_mempool.o
00:03:55.535 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:55.535 CC lib/ftl/utils/ftl_property.o
00:03:55.535 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:55.535 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:55.535 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:55.535 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:55.535 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:55.535 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:55.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:55.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:55.536 CC lib/ftl/base/ftl_base_dev.o 00:03:55.536 CC lib/ftl/base/ftl_base_bdev.o 00:03:55.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:55.536 CC lib/ftl/ftl_trace.o 00:03:55.794 LIB libspdk_nbd.a 00:03:56.053 SO libspdk_nbd.so.6.0 00:03:56.053 LIB libspdk_scsi.a 00:03:56.053 SYMLINK libspdk_nbd.so 00:03:56.053 SO libspdk_scsi.so.8.0 00:03:56.053 LIB libspdk_ublk.a 00:03:56.053 SYMLINK libspdk_scsi.so 00:03:56.053 SO libspdk_ublk.so.2.0 00:03:56.053 SYMLINK libspdk_ublk.so 00:03:56.313 CC lib/vhost/vhost_rpc.o 00:03:56.313 CC lib/vhost/vhost.o 00:03:56.313 CC lib/vhost/vhost_blk.o 00:03:56.313 LIB libspdk_ftl.a 00:03:56.313 CC lib/vhost/rte_vhost_user.o 00:03:56.313 CC lib/iscsi/conn.o 00:03:56.313 CC lib/iscsi/iscsi.o 00:03:56.313 CC lib/vhost/vhost_scsi.o 00:03:56.313 CC lib/iscsi/init_grp.o 00:03:56.313 CC lib/iscsi/param.o 00:03:56.313 CC lib/iscsi/md5.o 00:03:56.313 CC lib/iscsi/portal_grp.o 00:03:56.313 CC lib/iscsi/tgt_node.o 00:03:56.313 CC lib/iscsi/iscsi_subsystem.o 00:03:56.313 CC lib/iscsi/iscsi_rpc.o 00:03:56.313 CC lib/iscsi/task.o 00:03:56.313 SO libspdk_ftl.so.8.0 00:03:56.882 SYMLINK libspdk_ftl.so 00:03:57.141 LIB libspdk_nvmf.a 00:03:57.141 LIB libspdk_vhost.a 00:03:57.141 SO libspdk_nvmf.so.17.0 00:03:57.141 SO libspdk_vhost.so.7.1 00:03:57.141 SYMLINK libspdk_vhost.so 00:03:57.141 SYMLINK libspdk_nvmf.so 00:03:57.141 LIB libspdk_iscsi.a 00:03:57.400 SO libspdk_iscsi.so.7.0 00:03:57.400 SYMLINK libspdk_iscsi.so 00:03:57.659 CC module/vfu_device/vfu_virtio_scsi.o 00:03:57.659 CC module/vfu_device/vfu_virtio.o 00:03:57.659 CC module/vfu_device/vfu_virtio_blk.o 00:03:57.659 CC module/vfu_device/vfu_virtio_rpc.o 00:03:57.659 CC module/env_dpdk/env_dpdk_rpc.o 00:03:57.917 CC module/scheduler/gscheduler/gscheduler.o 00:03:57.917 CC module/blob/bdev/blob_bdev.o 00:03:57.917 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:57.917 CC module/accel/error/accel_error.o 00:03:57.917 CC module/accel/error/accel_error_rpc.o 00:03:57.917 CC module/sock/posix/posix.o 00:03:57.917 CC module/accel/dsa/accel_dsa.o 00:03:57.917 CC module/accel/dsa/accel_dsa_rpc.o 00:03:57.917 CC module/accel/ioat/accel_ioat.o 00:03:57.917 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:57.917 CC module/accel/ioat/accel_ioat_rpc.o 00:03:57.917 CC module/accel/iaa/accel_iaa_rpc.o 00:03:57.917 CC module/accel/iaa/accel_iaa.o 00:03:57.917 LIB libspdk_env_dpdk_rpc.a 00:03:57.917 SO libspdk_env_dpdk_rpc.so.5.0 00:03:57.917 SYMLINK libspdk_env_dpdk_rpc.so 00:03:57.917 LIB libspdk_scheduler_gscheduler.a 00:03:57.917 LIB libspdk_scheduler_dpdk_governor.a 00:03:57.917 SO libspdk_scheduler_gscheduler.so.3.0 00:03:57.917 LIB libspdk_accel_ioat.a 00:03:57.917 LIB libspdk_accel_error.a 00:03:57.917 LIB libspdk_scheduler_dynamic.a 00:03:57.917 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:58.175 LIB libspdk_accel_dsa.a 00:03:58.175 LIB libspdk_accel_iaa.a 00:03:58.175 SO libspdk_accel_error.so.1.0 00:03:58.175 SO libspdk_accel_ioat.so.5.0 00:03:58.175 SO libspdk_scheduler_dynamic.so.3.0 00:03:58.175 LIB libspdk_blob_bdev.a 00:03:58.175 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:58.175 SO libspdk_accel_dsa.so.4.0 00:03:58.175 SYMLINK libspdk_scheduler_gscheduler.so 00:03:58.175 SO libspdk_accel_iaa.so.2.0 00:03:58.175 SYMLINK libspdk_accel_ioat.so 00:03:58.175 SO libspdk_blob_bdev.so.10.1 00:03:58.175 SYMLINK libspdk_accel_error.so 00:03:58.175 SYMLINK libspdk_scheduler_dynamic.so 00:03:58.175 SYMLINK 
libspdk_accel_dsa.so 00:03:58.175 SYMLINK libspdk_accel_iaa.so 00:03:58.175 SYMLINK libspdk_blob_bdev.so 00:03:58.175 LIB libspdk_vfu_device.a 00:03:58.175 SO libspdk_vfu_device.so.2.0 00:03:58.434 SYMLINK libspdk_vfu_device.so 00:03:58.434 LIB libspdk_sock_posix.a 00:03:58.434 SO libspdk_sock_posix.so.5.0 00:03:58.434 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:58.434 CC module/bdev/passthru/vbdev_passthru.o 00:03:58.434 CC module/bdev/null/bdev_null.o 00:03:58.434 CC module/bdev/null/bdev_null_rpc.o 00:03:58.434 CC module/bdev/malloc/bdev_malloc.o 00:03:58.434 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:58.434 CC module/bdev/aio/bdev_aio.o 00:03:58.434 CC module/bdev/aio/bdev_aio_rpc.o 00:03:58.434 CC module/bdev/error/vbdev_error_rpc.o 00:03:58.434 CC module/bdev/error/vbdev_error.o 00:03:58.434 CC module/bdev/lvol/vbdev_lvol.o 00:03:58.434 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:58.434 CC module/bdev/gpt/gpt.o 00:03:58.434 CC module/bdev/gpt/vbdev_gpt.o 00:03:58.434 CC module/bdev/delay/vbdev_delay.o 00:03:58.434 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:58.434 CC module/bdev/nvme/bdev_nvme.o 00:03:58.434 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:58.434 CC module/bdev/nvme/nvme_rpc.o 00:03:58.434 CC module/blobfs/bdev/blobfs_bdev.o 00:03:58.434 CC module/bdev/nvme/bdev_mdns_client.o 00:03:58.434 CC module/bdev/nvme/vbdev_opal.o 00:03:58.434 CC module/bdev/iscsi/bdev_iscsi.o 00:03:58.434 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:58.434 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:58.434 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:58.434 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:58.434 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:58.434 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:58.434 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:58.434 CC module/bdev/split/vbdev_split.o 00:03:58.434 CC module/bdev/split/vbdev_split_rpc.o 00:03:58.434 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:58.434 CC module/bdev/ftl/bdev_ftl.o 00:03:58.434 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:58.434 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:58.434 CC module/bdev/raid/bdev_raid_rpc.o 00:03:58.434 CC module/bdev/raid/bdev_raid.o 00:03:58.434 CC module/bdev/raid/raid0.o 00:03:58.434 CC module/bdev/raid/bdev_raid_sb.o 00:03:58.434 CC module/bdev/raid/raid1.o 00:03:58.434 CC module/bdev/raid/concat.o 00:03:58.434 SYMLINK libspdk_sock_posix.so 00:03:58.692 LIB libspdk_blobfs_bdev.a 00:03:58.692 SO libspdk_blobfs_bdev.so.5.0 00:03:58.692 LIB libspdk_bdev_null.a 00:03:58.692 LIB libspdk_bdev_gpt.a 00:03:58.692 LIB libspdk_bdev_split.a 00:03:58.692 LIB libspdk_bdev_passthru.a 00:03:58.692 SO libspdk_bdev_null.so.5.0 00:03:58.692 LIB libspdk_bdev_error.a 00:03:58.692 SO libspdk_bdev_gpt.so.5.0 00:03:58.692 SO libspdk_bdev_split.so.5.0 00:03:58.692 SYMLINK libspdk_blobfs_bdev.so 00:03:58.693 LIB libspdk_bdev_ftl.a 00:03:58.693 SO libspdk_bdev_passthru.so.5.0 00:03:58.693 LIB libspdk_bdev_malloc.a 00:03:58.693 SO libspdk_bdev_error.so.5.0 00:03:58.951 SYMLINK libspdk_bdev_null.so 00:03:58.951 SO libspdk_bdev_ftl.so.5.0 00:03:58.951 LIB libspdk_bdev_aio.a 00:03:58.951 LIB libspdk_bdev_iscsi.a 00:03:58.951 SYMLINK libspdk_bdev_gpt.so 00:03:58.951 LIB libspdk_bdev_zone_block.a 00:03:58.951 SYMLINK libspdk_bdev_split.so 00:03:58.951 SO libspdk_bdev_malloc.so.5.0 00:03:58.951 SO libspdk_bdev_zone_block.so.5.0 00:03:58.951 SO libspdk_bdev_aio.so.5.0 00:03:58.951 SYMLINK libspdk_bdev_error.so 00:03:58.951 SYMLINK libspdk_bdev_passthru.so 00:03:58.951 LIB libspdk_bdev_delay.a 
00:03:58.951 SO libspdk_bdev_iscsi.so.5.0 00:03:58.951 SYMLINK libspdk_bdev_ftl.so 00:03:58.951 LIB libspdk_bdev_lvol.a 00:03:58.951 SO libspdk_bdev_delay.so.5.0 00:03:58.951 SYMLINK libspdk_bdev_malloc.so 00:03:58.951 SYMLINK libspdk_bdev_zone_block.so 00:03:58.951 SYMLINK libspdk_bdev_aio.so 00:03:58.951 SYMLINK libspdk_bdev_iscsi.so 00:03:58.951 SO libspdk_bdev_lvol.so.5.0 00:03:58.951 LIB libspdk_bdev_virtio.a 00:03:58.951 SYMLINK libspdk_bdev_delay.so 00:03:58.951 SYMLINK libspdk_bdev_lvol.so 00:03:58.951 SO libspdk_bdev_virtio.so.5.0 00:03:58.951 SYMLINK libspdk_bdev_virtio.so 00:03:59.210 LIB libspdk_bdev_raid.a 00:03:59.210 SO libspdk_bdev_raid.so.5.0 00:03:59.468 SYMLINK libspdk_bdev_raid.so 00:04:00.035 LIB libspdk_bdev_nvme.a 00:04:00.035 SO libspdk_bdev_nvme.so.6.0 00:04:00.293 SYMLINK libspdk_bdev_nvme.so 00:04:00.552 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.552 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.552 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:00.552 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.552 CC module/event/subsystems/vmd/vmd.o 00:04:00.552 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.552 CC module/event/subsystems/sock/sock.o 00:04:00.552 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.827 LIB libspdk_event_scheduler.a 00:04:00.827 LIB libspdk_event_sock.a 00:04:00.827 LIB libspdk_event_vfu_tgt.a 00:04:00.827 LIB libspdk_event_vmd.a 00:04:00.827 SO libspdk_event_scheduler.so.3.0 00:04:00.827 LIB libspdk_event_iobuf.a 00:04:00.827 LIB libspdk_event_vhost_blk.a 00:04:00.827 SO libspdk_event_vfu_tgt.so.2.0 00:04:00.827 SO libspdk_event_sock.so.4.0 00:04:00.827 SO libspdk_event_iobuf.so.2.0 00:04:00.827 SO libspdk_event_vmd.so.5.0 00:04:00.827 SO libspdk_event_vhost_blk.so.2.0 00:04:00.827 SYMLINK libspdk_event_scheduler.so 00:04:00.827 SYMLINK libspdk_event_vfu_tgt.so 00:04:00.827 SYMLINK libspdk_event_sock.so 00:04:00.827 SYMLINK libspdk_event_iobuf.so 00:04:00.827 SYMLINK libspdk_event_vhost_blk.so 00:04:00.827 SYMLINK libspdk_event_vmd.so 00:04:01.099 CC module/event/subsystems/accel/accel.o 00:04:01.099 LIB libspdk_event_accel.a 00:04:01.099 SO libspdk_event_accel.so.5.0 00:04:01.099 SYMLINK libspdk_event_accel.so 00:04:01.356 CC module/event/subsystems/bdev/bdev.o 00:04:01.613 LIB libspdk_event_bdev.a 00:04:01.613 SO libspdk_event_bdev.so.5.0 00:04:01.613 SYMLINK libspdk_event_bdev.so 00:04:01.871 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.871 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.871 CC module/event/subsystems/scsi/scsi.o 00:04:01.871 CC module/event/subsystems/ublk/ublk.o 00:04:01.871 CC module/event/subsystems/nbd/nbd.o 00:04:01.871 LIB libspdk_event_ublk.a 00:04:01.871 LIB libspdk_event_scsi.a 00:04:01.871 LIB libspdk_event_nbd.a 00:04:01.871 SO libspdk_event_ublk.so.2.0 00:04:01.871 SO libspdk_event_scsi.so.5.0 00:04:02.130 SO libspdk_event_nbd.so.5.0 00:04:02.130 LIB libspdk_event_nvmf.a 00:04:02.130 SYMLINK libspdk_event_ublk.so 00:04:02.130 SO libspdk_event_nvmf.so.5.0 00:04:02.130 SYMLINK libspdk_event_scsi.so 00:04:02.130 SYMLINK libspdk_event_nbd.so 00:04:02.130 SYMLINK libspdk_event_nvmf.so 00:04:02.130 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.130 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.389 LIB libspdk_event_vhost_scsi.a 00:04:02.389 SO libspdk_event_vhost_scsi.so.2.0 00:04:02.389 LIB libspdk_event_iscsi.a 00:04:02.389 SO libspdk_event_iscsi.so.5.0 00:04:02.389 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.389 SYMLINK libspdk_event_iscsi.so 
00:04:02.647 SO libspdk.so.5.0 00:04:02.647 SYMLINK libspdk.so 00:04:02.908 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.908 CXX app/trace/trace.o 00:04:02.908 CC app/trace_record/trace_record.o 00:04:02.908 CC app/spdk_nvme_perf/perf.o 00:04:02.908 CC app/spdk_lspci/spdk_lspci.o 00:04:02.908 CC test/rpc_client/rpc_client_test.o 00:04:02.908 CC app/spdk_top/spdk_top.o 00:04:02.908 TEST_HEADER include/spdk/accel.h 00:04:02.908 CC app/spdk_nvme_identify/identify.o 00:04:02.908 TEST_HEADER include/spdk/accel_module.h 00:04:02.908 TEST_HEADER include/spdk/assert.h 00:04:02.908 TEST_HEADER include/spdk/base64.h 00:04:02.908 TEST_HEADER include/spdk/bdev.h 00:04:02.908 TEST_HEADER include/spdk/barrier.h 00:04:02.908 TEST_HEADER include/spdk/bdev_module.h 00:04:02.908 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.908 TEST_HEADER include/spdk/bit_array.h 00:04:02.908 TEST_HEADER include/spdk/bit_pool.h 00:04:02.908 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.908 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.908 TEST_HEADER include/spdk/blobfs.h 00:04:02.908 TEST_HEADER include/spdk/blob.h 00:04:02.908 TEST_HEADER include/spdk/conf.h 00:04:02.908 TEST_HEADER include/spdk/cpuset.h 00:04:02.908 TEST_HEADER include/spdk/config.h 00:04:02.908 TEST_HEADER include/spdk/crc32.h 00:04:02.908 TEST_HEADER include/spdk/crc16.h 00:04:02.908 TEST_HEADER include/spdk/crc64.h 00:04:02.908 TEST_HEADER include/spdk/dif.h 00:04:02.908 TEST_HEADER include/spdk/dma.h 00:04:02.908 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.908 TEST_HEADER include/spdk/endian.h 00:04:02.908 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.908 TEST_HEADER include/spdk/event.h 00:04:02.908 TEST_HEADER include/spdk/env.h 00:04:02.908 TEST_HEADER include/spdk/fd_group.h 00:04:02.908 TEST_HEADER include/spdk/fd.h 00:04:02.908 TEST_HEADER include/spdk/file.h 00:04:02.908 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.908 TEST_HEADER include/spdk/ftl.h 00:04:02.908 TEST_HEADER include/spdk/histogram_data.h 00:04:02.909 TEST_HEADER include/spdk/hexlify.h 00:04:02.909 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.909 TEST_HEADER include/spdk/idxd.h 00:04:02.909 TEST_HEADER include/spdk/init.h 00:04:02.909 TEST_HEADER include/spdk/ioat.h 00:04:02.909 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.909 TEST_HEADER include/spdk/json.h 00:04:02.909 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.909 TEST_HEADER include/spdk/likely.h 00:04:02.909 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.909 TEST_HEADER include/spdk/log.h 00:04:02.909 TEST_HEADER include/spdk/lvol.h 00:04:02.909 TEST_HEADER include/spdk/mmio.h 00:04:02.909 CC app/nvmf_tgt/nvmf_main.o 00:04:02.909 TEST_HEADER include/spdk/nbd.h 00:04:02.909 TEST_HEADER include/spdk/memory.h 00:04:02.909 CC app/vhost/vhost.o 00:04:02.909 TEST_HEADER include/spdk/nvme.h 00:04:02.909 TEST_HEADER include/spdk/notify.h 00:04:02.909 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.909 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.909 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.909 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.909 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.909 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.909 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.909 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.909 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.909 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.909 TEST_HEADER include/spdk/nvmf.h 00:04:02.909 TEST_HEADER include/spdk/opal_spec.h 00:04:02.909 CC app/spdk_dd/spdk_dd.o 00:04:02.909 TEST_HEADER include/spdk/pci_ids.h 
00:04:02.909 TEST_HEADER include/spdk/opal.h 00:04:02.909 TEST_HEADER include/spdk/pipe.h 00:04:02.909 TEST_HEADER include/spdk/queue.h 00:04:02.909 TEST_HEADER include/spdk/reduce.h 00:04:02.909 TEST_HEADER include/spdk/rpc.h 00:04:02.909 TEST_HEADER include/spdk/scheduler.h 00:04:02.909 TEST_HEADER include/spdk/scsi.h 00:04:02.909 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.909 TEST_HEADER include/spdk/stdinc.h 00:04:02.909 TEST_HEADER include/spdk/sock.h 00:04:02.909 TEST_HEADER include/spdk/string.h 00:04:02.909 TEST_HEADER include/spdk/thread.h 00:04:02.909 CC app/spdk_tgt/spdk_tgt.o 00:04:02.909 TEST_HEADER include/spdk/tree.h 00:04:02.909 TEST_HEADER include/spdk/trace.h 00:04:02.909 TEST_HEADER include/spdk/trace_parser.h 00:04:02.909 TEST_HEADER include/spdk/util.h 00:04:02.909 TEST_HEADER include/spdk/ublk.h 00:04:02.909 TEST_HEADER include/spdk/uuid.h 00:04:02.909 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.909 TEST_HEADER include/spdk/version.h 00:04:02.909 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.909 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.909 TEST_HEADER include/spdk/vhost.h 00:04:02.909 CC examples/sock/hello_world/hello_sock.o 00:04:02.909 TEST_HEADER include/spdk/vmd.h 00:04:02.909 TEST_HEADER include/spdk/xor.h 00:04:02.909 CC examples/nvme/hello_world/hello_world.o 00:04:02.909 TEST_HEADER include/spdk/zipf.h 00:04:02.909 CXX test/cpp_headers/accel.o 00:04:02.909 CXX test/cpp_headers/accel_module.o 00:04:02.909 CXX test/cpp_headers/assert.o 00:04:02.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:02.909 CC examples/nvme/abort/abort.o 00:04:02.909 CXX test/cpp_headers/barrier.o 00:04:02.909 CXX test/cpp_headers/base64.o 00:04:02.909 CC examples/util/zipf/zipf.o 00:04:02.909 CXX test/cpp_headers/bdev.o 00:04:02.909 CXX test/cpp_headers/bdev_module.o 00:04:02.909 CXX test/cpp_headers/bdev_zone.o 00:04:02.909 CC examples/nvme/reconnect/reconnect.o 00:04:02.909 CXX test/cpp_headers/bit_array.o 00:04:02.909 CXX test/cpp_headers/bit_pool.o 00:04:02.909 CC examples/nvme/hotplug/hotplug.o 00:04:02.909 CXX test/cpp_headers/blob_bdev.o 00:04:02.909 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.909 CXX test/cpp_headers/blobfs.o 00:04:02.909 CXX test/cpp_headers/blob.o 00:04:02.909 CXX test/cpp_headers/conf.o 00:04:02.909 CXX test/cpp_headers/config.o 00:04:02.909 CXX test/cpp_headers/cpuset.o 00:04:02.909 CXX test/cpp_headers/crc16.o 00:04:02.909 CC examples/nvme/arbitration/arbitration.o 00:04:02.909 CC examples/ioat/perf/perf.o 00:04:02.909 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.909 CXX test/cpp_headers/crc32.o 00:04:02.909 CXX test/cpp_headers/crc64.o 00:04:02.909 CC examples/idxd/perf/perf.o 00:04:02.909 CXX test/cpp_headers/dif.o 00:04:02.909 CC examples/vmd/lsvmd/lsvmd.o 00:04:02.909 CC examples/ioat/verify/verify.o 00:04:02.909 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.909 CC examples/nvmf/nvmf/nvmf.o 00:04:02.909 CC test/thread/poller_perf/poller_perf.o 00:04:02.909 CC examples/accel/perf/accel_perf.o 00:04:02.909 CC examples/vmd/led/led.o 00:04:02.909 CC test/env/memory/memory_ut.o 00:04:02.909 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.909 CC test/env/pci/pci_ut.o 00:04:02.909 CC test/nvme/compliance/nvme_compliance.o 00:04:02.909 CC app/fio/nvme/fio_plugin.o 00:04:02.909 CC test/env/vtophys/vtophys.o 00:04:02.909 CC test/nvme/startup/startup.o 00:04:02.909 CC test/app/histogram_perf/histogram_perf.o 00:04:02.909 CC test/nvme/err_injection/err_injection.o 00:04:02.909 CC 
test/nvme/simple_copy/simple_copy.o 00:04:02.909 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.909 CC test/nvme/aer/aer.o 00:04:02.909 CC test/nvme/reset/reset.o 00:04:02.909 CC test/nvme/fdp/fdp.o 00:04:02.909 CC test/event/event_perf/event_perf.o 00:04:02.909 CC test/nvme/overhead/overhead.o 00:04:02.909 CC examples/blob/hello_world/hello_blob.o 00:04:02.909 CC test/event/reactor/reactor.o 00:04:02.909 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:02.909 CC test/nvme/connect_stress/connect_stress.o 00:04:02.909 CC test/nvme/boot_partition/boot_partition.o 00:04:02.909 CXX test/cpp_headers/dma.o 00:04:02.909 CC test/nvme/sgl/sgl.o 00:04:02.909 CC test/nvme/cuse/cuse.o 00:04:02.909 CC test/app/stub/stub.o 00:04:03.175 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.175 CC test/event/reactor_perf/reactor_perf.o 00:04:03.175 CC examples/blob/cli/blobcli.o 00:04:03.175 CC test/nvme/reserve/reserve.o 00:04:03.175 CC test/nvme/e2edp/nvme_dp.o 00:04:03.175 CC app/fio/bdev/fio_plugin.o 00:04:03.175 CC test/event/app_repeat/app_repeat.o 00:04:03.175 CC test/app/jsoncat/jsoncat.o 00:04:03.175 CC examples/thread/thread/thread_ex.o 00:04:03.175 CC test/dma/test_dma/test_dma.o 00:04:03.175 CC test/accel/dif/dif.o 00:04:03.175 CC test/bdev/bdevio/bdevio.o 00:04:03.175 CC test/app/bdev_svc/bdev_svc.o 00:04:03.175 CC test/blobfs/mkfs/mkfs.o 00:04:03.175 CC test/event/scheduler/scheduler.o 00:04:03.175 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.175 CC test/lvol/esnap/esnap.o 00:04:03.175 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.175 LINK spdk_nvme_discover 00:04:03.440 LINK nvmf_tgt 00:04:03.440 LINK spdk_lspci 00:04:03.440 LINK vhost 00:04:03.440 LINK spdk_trace_record 00:04:03.440 LINK cmb_copy 00:04:03.440 LINK poller_perf 00:04:03.440 LINK env_dpdk_post_init 00:04:03.440 LINK iscsi_tgt 00:04:03.440 LINK led 00:04:03.440 LINK pmr_persistence 00:04:03.440 LINK vtophys 00:04:03.440 LINK interrupt_tgt 00:04:03.440 LINK startup 00:04:03.440 LINK reactor 00:04:03.440 LINK spdk_tgt 00:04:03.440 LINK rpc_client_test 00:04:03.440 LINK jsoncat 00:04:03.440 LINK app_repeat 00:04:03.440 LINK ioat_perf 00:04:03.440 LINK err_injection 00:04:03.440 LINK boot_partition 00:04:03.440 LINK doorbell_aers 00:04:03.440 LINK lsvmd 00:04:03.440 CXX test/cpp_headers/endian.o 00:04:03.440 CXX test/cpp_headers/env.o 00:04:03.440 LINK zipf 00:04:03.440 CXX test/cpp_headers/env_dpdk.o 00:04:03.440 CXX test/cpp_headers/event.o 00:04:03.440 CXX test/cpp_headers/fd_group.o 00:04:03.440 LINK reserve 00:04:03.440 LINK histogram_perf 00:04:03.440 LINK simple_copy 00:04:03.440 LINK reactor_perf 00:04:03.440 LINK hello_blob 00:04:03.440 LINK event_perf 00:04:03.440 LINK hello_bdev 00:04:03.440 LINK reset 00:04:03.703 LINK verify 00:04:03.703 LINK hello_world 00:04:03.703 LINK stub 00:04:03.703 LINK scheduler 00:04:03.703 LINK fused_ordering 00:04:03.703 LINK connect_stress 00:04:03.703 LINK idxd_perf 00:04:03.703 LINK hello_sock 00:04:03.703 LINK spdk_dd 00:04:03.703 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.703 LINK abort 00:04:03.703 LINK fdp 00:04:03.703 LINK hotplug 00:04:03.703 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.703 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.703 CXX test/cpp_headers/fd.o 00:04:03.703 LINK mkfs 00:04:03.703 LINK bdev_svc 00:04:03.703 CXX test/cpp_headers/file.o 00:04:03.703 CXX test/cpp_headers/ftl.o 00:04:03.703 CXX test/cpp_headers/gpt_spec.o 00:04:03.703 CXX test/cpp_headers/hexlify.o 00:04:03.703 CXX test/cpp_headers/histogram_data.o 00:04:03.703 CXX 
test/cpp_headers/idxd.o 00:04:03.703 CXX test/cpp_headers/idxd_spec.o 00:04:03.703 CXX test/cpp_headers/init.o 00:04:03.703 CXX test/cpp_headers/ioat.o 00:04:03.703 CXX test/cpp_headers/ioat_spec.o 00:04:03.703 CXX test/cpp_headers/iscsi_spec.o 00:04:03.703 CXX test/cpp_headers/json.o 00:04:03.703 CXX test/cpp_headers/jsonrpc.o 00:04:03.703 CXX test/cpp_headers/likely.o 00:04:03.703 CXX test/cpp_headers/log.o 00:04:03.703 CXX test/cpp_headers/lvol.o 00:04:03.703 CXX test/cpp_headers/memory.o 00:04:03.703 LINK pci_ut 00:04:03.703 CXX test/cpp_headers/mmio.o 00:04:03.703 CXX test/cpp_headers/nbd.o 00:04:03.703 CXX test/cpp_headers/notify.o 00:04:03.703 LINK sgl 00:04:03.703 LINK thread 00:04:03.703 CXX test/cpp_headers/nvme.o 00:04:03.703 LINK nvme_dp 00:04:03.703 CXX test/cpp_headers/nvme_intel.o 00:04:03.703 LINK arbitration 00:04:03.703 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.703 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.703 LINK aer 00:04:03.703 CXX test/cpp_headers/nvme_zns.o 00:04:03.703 CXX test/cpp_headers/nvme_spec.o 00:04:03.703 LINK overhead 00:04:03.703 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.703 LINK nvmf 00:04:03.703 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.703 LINK test_dma 00:04:03.703 CXX test/cpp_headers/nvmf_spec.o 00:04:03.703 CXX test/cpp_headers/nvmf_transport.o 00:04:03.703 CXX test/cpp_headers/nvmf.o 00:04:03.703 CXX test/cpp_headers/opal.o 00:04:03.703 CXX test/cpp_headers/pci_ids.o 00:04:03.703 CXX test/cpp_headers/opal_spec.o 00:04:03.703 CXX test/cpp_headers/queue.o 00:04:03.703 LINK nvme_compliance 00:04:03.703 CXX test/cpp_headers/pipe.o 00:04:03.703 CXX test/cpp_headers/rpc.o 00:04:03.703 CXX test/cpp_headers/reduce.o 00:04:03.703 CXX test/cpp_headers/scsi.o 00:04:03.703 CXX test/cpp_headers/scheduler.o 00:04:03.703 CXX test/cpp_headers/scsi_spec.o 00:04:03.703 CXX test/cpp_headers/sock.o 00:04:03.703 CXX test/cpp_headers/stdinc.o 00:04:03.703 LINK nvme_manage 00:04:03.703 CXX test/cpp_headers/string.o 00:04:03.703 CXX test/cpp_headers/thread.o 00:04:03.703 CXX test/cpp_headers/trace.o 00:04:03.964 CXX test/cpp_headers/trace_parser.o 00:04:03.964 LINK reconnect 00:04:03.964 CXX test/cpp_headers/tree.o 00:04:03.964 CXX test/cpp_headers/ublk.o 00:04:03.964 CXX test/cpp_headers/util.o 00:04:03.964 CXX test/cpp_headers/uuid.o 00:04:03.964 LINK spdk_trace 00:04:03.964 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.964 CXX test/cpp_headers/version.o 00:04:03.964 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.964 CXX test/cpp_headers/vhost.o 00:04:03.964 CXX test/cpp_headers/vmd.o 00:04:03.964 CXX test/cpp_headers/xor.o 00:04:03.964 CXX test/cpp_headers/zipf.o 00:04:03.964 LINK bdevio 00:04:03.964 LINK blobcli 00:04:03.964 LINK spdk_bdev 00:04:03.964 LINK nvme_fuzz 00:04:03.964 LINK spdk_nvme 00:04:03.964 LINK dif 00:04:03.964 LINK accel_perf 00:04:04.224 LINK mem_callbacks 00:04:04.224 LINK spdk_top 00:04:04.224 LINK spdk_nvme_identify 00:04:04.224 LINK bdevperf 00:04:04.224 LINK vhost_fuzz 00:04:04.224 LINK spdk_nvme_perf 00:04:04.224 LINK memory_ut 00:04:04.483 LINK cuse 00:04:05.052 LINK iscsi_fuzz 00:04:06.959 LINK esnap 00:04:07.219 00:04:07.219 real 0m29.547s 00:04:07.219 user 5m4.092s 00:04:07.219 sys 2m19.174s 00:04:07.219 13:34:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:07.219 13:34:09 -- common/autotest_common.sh@10 -- $ set +x 00:04:07.219 ************************************ 00:04:07.219 END TEST make 00:04:07.219 ************************************ 00:04:07.219 13:34:09 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.219 13:34:09 -- nvmf/common.sh@7 -- # uname -s 00:04:07.219 13:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.219 13:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.219 13:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.219 13:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.219 13:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.219 13:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.219 13:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.219 13:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.219 13:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.219 13:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.219 13:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:07.219 13:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:07.219 13:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.219 13:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.219 13:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.219 13:34:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.219 13:34:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.219 13:34:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.219 13:34:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.219 13:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.219 13:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.219 13:34:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.219 13:34:09 -- paths/export.sh@5 -- # export PATH 00:04:07.219 13:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.219 13:34:09 -- nvmf/common.sh@46 -- # : 0 00:04:07.219 13:34:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:07.219 13:34:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:07.219 13:34:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:07.219 13:34:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.219 13:34:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.219 13:34:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:07.219 13:34:09 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:04:07.219 13:34:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:07.219 13:34:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.219 13:34:09 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.219 13:34:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.219 13:34:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.219 13:34:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.219 13:34:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.219 13:34:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.219 13:34:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.219 13:34:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.219 13:34:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.219 13:34:09 -- spdk/autotest.sh@48 -- # udevadm_pid=1370548 00:04:07.219 13:34:09 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.219 13:34:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.219 13:34:09 -- spdk/autotest.sh@54 -- # echo 1370550 00:04:07.219 13:34:09 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.219 13:34:09 -- spdk/autotest.sh@56 -- # echo 1370551 00:04:07.219 13:34:09 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:07.219 13:34:09 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:04:07.219 13:34:09 -- spdk/autotest.sh@60 -- # echo 1370552 00:04:07.219 13:34:09 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:07.219 13:34:09 -- spdk/autotest.sh@62 -- # echo 1370553 00:04:07.219 13:34:09 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:07.219 13:34:09 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.219 13:34:09 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:07.219 13:34:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:07.219 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:04:07.219 13:34:09 -- spdk/autotest.sh@70 -- # create_test_list 00:04:07.219 13:34:09 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:07.219 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:04:07.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:04:07.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:04:07.219 13:34:09 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:07.219 13:34:09 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.219 13:34:09 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.219 13:34:09 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:07.219 13:34:09 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.219 13:34:09 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:07.219 13:34:09 -- common/autotest_common.sh@1440 -- # uname 00:04:07.219 13:34:09 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:07.219 13:34:09 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:07.219 13:34:09 -- common/autotest_common.sh@1460 -- # uname 00:04:07.219 13:34:09 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:07.219 13:34:09 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:07.219 13:34:09 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:07.219 13:34:09 -- spdk/autotest.sh@83 -- # hash lcov 00:04:07.219 13:34:09 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:07.219 13:34:09 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:07.219 --rc lcov_branch_coverage=1 00:04:07.219 --rc lcov_function_coverage=1 00:04:07.219 --rc genhtml_branch_coverage=1 00:04:07.219 --rc genhtml_function_coverage=1 00:04:07.219 --rc genhtml_legend=1 00:04:07.219 --rc geninfo_all_blocks=1 00:04:07.219 ' 00:04:07.478 13:34:09 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:07.478 --rc lcov_branch_coverage=1 00:04:07.478 --rc lcov_function_coverage=1 00:04:07.478 --rc genhtml_branch_coverage=1 00:04:07.478 --rc genhtml_function_coverage=1 00:04:07.478 --rc genhtml_legend=1 00:04:07.478 --rc geninfo_all_blocks=1 00:04:07.478 ' 00:04:07.478 13:34:09 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:07.478 --rc lcov_branch_coverage=1 00:04:07.478 --rc lcov_function_coverage=1 00:04:07.478 --rc genhtml_branch_coverage=1 00:04:07.478 --rc genhtml_function_coverage=1 00:04:07.478 --rc genhtml_legend=1 00:04:07.478 
--rc geninfo_all_blocks=1 00:04:07.478 --no-external' 00:04:07.478 13:34:09 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:07.478 --rc lcov_branch_coverage=1 00:04:07.478 --rc lcov_function_coverage=1 00:04:07.478 --rc genhtml_branch_coverage=1 00:04:07.478 --rc genhtml_function_coverage=1 00:04:07.478 --rc genhtml_legend=1 00:04:07.478 --rc geninfo_all_blocks=1 00:04:07.478 --no-external' 00:04:07.479 13:34:09 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:07.479 lcov: LCOV version 1.14 00:04:07.479 13:34:09 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:08.858 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:08.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:08.859 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no 
functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:08.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:08.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:09.119 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 
00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:09.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:09.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:09.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:09.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:09.120 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:21.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:21.336 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:21.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:21.336 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:21.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:21.336 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:31.330 13:34:32 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:31.330 13:34:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:31.330 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:04:31.330 13:34:32 -- spdk/autotest.sh@102 -- # rm -f 00:04:31.330 13:34:32 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.235 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:33.235 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:33.235 0000:00:04.1 (8086 
2021): Already using the ioatdma driver 00:04:33.494 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:33.494 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:33.494 13:34:35 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:33.494 13:34:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:33.494 13:34:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:33.494 13:34:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:33.494 13:34:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:33.494 13:34:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:33.494 13:34:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:33.494 13:34:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.494 13:34:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:33.494 13:34:35 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:33.494 13:34:35 -- spdk/autotest.sh@121 -- # grep -v p 00:04:33.494 13:34:35 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:33.494 13:34:35 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:33.494 13:34:35 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:33.494 13:34:35 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:33.494 13:34:35 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:33.494 13:34:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:33.494 No valid GPT data, bailing 00:04:33.494 13:34:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.494 13:34:35 -- scripts/common.sh@393 -- # pt= 00:04:33.494 13:34:35 -- scripts/common.sh@394 -- # return 1 00:04:33.752 13:34:35 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:33.752 1+0 records in 00:04:33.752 1+0 records out 00:04:33.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00296231 s, 354 MB/s 00:04:33.752 13:34:35 -- spdk/autotest.sh@129 -- # sync 00:04:33.752 13:34:35 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:33.752 13:34:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:33.752 13:34:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.034 13:34:40 -- spdk/autotest.sh@135 -- # uname -s 00:04:39.034 13:34:40 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:39.034 13:34:40 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.034 13:34:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.034 13:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.034 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:39.034 ************************************ 00:04:39.034 START TEST setup.sh 00:04:39.034 ************************************ 00:04:39.034 13:34:40 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.034 * Looking for test storage... 00:04:39.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.034 13:34:40 -- setup/test-setup.sh@10 -- # uname -s 00:04:39.034 13:34:40 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.034 13:34:40 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.034 13:34:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.034 13:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.034 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:39.034 ************************************ 00:04:39.034 START TEST acl 00:04:39.034 ************************************ 00:04:39.034 13:34:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.034 * Looking for test storage... 00:04:39.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.034 13:34:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.034 13:34:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.034 13:34:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.034 13:34:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.034 13:34:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.034 13:34:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.034 13:34:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.034 13:34:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.034 13:34:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.034 13:34:40 -- setup/acl.sh@12 -- # devs=() 00:04:39.034 13:34:40 -- setup/acl.sh@12 -- # declare -a devs 00:04:39.034 13:34:40 -- setup/acl.sh@13 -- # drivers=() 00:04:39.034 13:34:40 -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.034 13:34:40 -- setup/acl.sh@51 -- # setup reset 00:04:39.034 13:34:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.034 13:34:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.575 13:34:43 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.575 13:34:43 -- setup/acl.sh@16 -- # local dev driver 00:04:41.575 13:34:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.575 13:34:43 -- setup/acl.sh@15 -- # setup output status 00:04:41.575 13:34:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.575 13:34:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:44.114 Hugepages 00:04:44.114 node hugesize free / total 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 00:04:44.114 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.114 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.114 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.114 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:44.115 13:34:46 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 
13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:44.115 13:34:46 -- setup/acl.sh@20 -- # continue 00:04:44.115 13:34:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:44.115 13:34:46 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:44.115 13:34:46 -- setup/acl.sh@54 -- # run_test denied denied 00:04:44.115 13:34:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.115 13:34:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.115 13:34:46 -- common/autotest_common.sh@10 -- # set +x 00:04:44.115 ************************************ 00:04:44.115 START TEST denied 00:04:44.115 ************************************ 00:04:44.115 13:34:46 -- common/autotest_common.sh@1104 -- # denied 00:04:44.115 13:34:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:04:44.115 13:34:46 -- setup/acl.sh@38 -- # setup output config 00:04:44.115 13:34:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:04:44.115 13:34:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.115 13:34:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.657 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:04:46.657 13:34:48 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:04:46.657 13:34:49 -- setup/acl.sh@28 -- # local dev driver 00:04:46.657 13:34:49 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:46.657 13:34:49 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:04:46.657 13:34:49 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:04:46.657 13:34:49 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:46.657 13:34:49 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:46.657 13:34:49 -- setup/acl.sh@41 -- # setup reset 00:04:46.657 13:34:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.657 13:34:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.951 00:04:49.951 real 0m5.898s 00:04:49.951 user 0m1.708s 00:04:49.951 sys 0m3.210s 00:04:49.951 13:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.951 13:34:52 -- common/autotest_common.sh@10 -- # set 
+x 00:04:49.951 ************************************ 00:04:49.951 END TEST denied 00:04:49.951 ************************************ 00:04:49.951 13:34:52 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:49.951 13:34:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.951 13:34:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.951 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:04:49.951 ************************************ 00:04:49.951 START TEST allowed 00:04:49.951 ************************************ 00:04:49.951 13:34:52 -- common/autotest_common.sh@1104 -- # allowed 00:04:49.951 13:34:52 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:04:49.951 13:34:52 -- setup/acl.sh@45 -- # setup output config 00:04:49.951 13:34:52 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:04:49.951 13:34:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.951 13:34:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:54.147 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:54.147 13:34:55 -- setup/acl.sh@47 -- # verify 00:04:54.147 13:34:55 -- setup/acl.sh@28 -- # local dev driver 00:04:54.147 13:34:55 -- setup/acl.sh@48 -- # setup reset 00:04:54.147 13:34:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.147 13:34:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.695 00:04:56.695 real 0m6.329s 00:04:56.695 user 0m1.916s 00:04:56.696 sys 0m3.495s 00:04:56.696 13:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.696 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.696 ************************************ 00:04:56.696 END TEST allowed 00:04:56.696 ************************************ 00:04:56.696 00:04:56.696 real 0m18.136s 00:04:56.696 user 0m5.890s 00:04:56.696 sys 0m10.565s 00:04:56.696 13:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.696 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.696 ************************************ 00:04:56.696 END TEST acl 00:04:56.696 ************************************ 00:04:56.696 13:34:58 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.696 13:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.696 13:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.696 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.696 ************************************ 00:04:56.696 START TEST hugepages 00:04:56.696 ************************************ 00:04:56.696 13:34:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:56.696 * Looking for test storage... 
00:04:56.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:56.696 13:34:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:56.696 13:34:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:56.696 13:34:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:56.696 13:34:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:56.696 13:34:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:56.696 13:34:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:56.696 13:34:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:56.696 13:34:58 -- setup/common.sh@18 -- # local node= 00:04:56.696 13:34:58 -- setup/common.sh@19 -- # local var val 00:04:56.696 13:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.696 13:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.696 13:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.696 13:34:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.696 13:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.696 13:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 171716852 kB' 'MemAvailable: 174600180 kB' 'Buffers: 3896 kB' 'Cached: 11767352 kB' 'SwapCached: 0 kB' 'Active: 8771240 kB' 'Inactive: 3507440 kB' 'Active(anon): 8375756 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510740 kB' 'Mapped: 216864 kB' 'Shmem: 7868324 kB' 'KReclaimable: 249708 kB' 'Slab: 839344 kB' 'SReclaimable: 249708 kB' 'SUnreclaim: 589636 kB' 'KernelStack: 20608 kB' 'PageTables: 9416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 9939464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314904 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 
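The long run of escaped pattern tests surrounding this point is bash xtrace output from setup/common.sh's get_meminfo: the meminfo snapshot printed by the earlier printf is read back record by record with IFS=': ', and each key is compared against the requested field (Hugepagesize here; xtrace escapes every character of the glob, hence \H\u\g\e...). A condensed sketch of the logic being traced, not the script verbatim (the real script reads from a captured mem array rather than /proc/meminfo directly):

  # Return the value of one meminfo field, e.g. `get_meminfo Hugepagesize` -> 2048.
  get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      # keys that do not match the requested field fall through ("continue" in the trace)
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }

On this node the loop ends with "echo 2048", so default_hugepages is set to 2048 kB pages.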
00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.696 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:56.696 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.696 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # continue 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.697 13:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.697 13:34:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:56.697 13:34:58 -- setup/common.sh@33 -- # echo 2048 00:04:56.697 13:34:58 -- setup/common.sh@33 -- # return 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:56.697 13:34:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:56.697 13:34:58 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:56.697 13:34:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:56.697 13:34:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:56.697 13:34:58 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:56.697 13:34:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:56.697 13:34:58 -- setup/hugepages.sh@207 -- # get_nodes 00:04:56.697 13:34:58 -- setup/hugepages.sh@27 -- # local node 00:04:56.697 13:34:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.697 13:34:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:56.697 13:34:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.697 13:34:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:56.697 13:34:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.697 13:34:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.697 13:34:58 -- setup/hugepages.sh@208 -- # clear_hp 00:04:56.697 13:34:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:56.697 13:34:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.697 13:34:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.697 13:34:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.697 13:34:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:56.697 13:34:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.697 13:34:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:56.697 13:34:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:56.697 13:34:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:56.697 13:34:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:56.697 13:34:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.697 13:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.697 13:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.697 ************************************ 00:04:56.697 START TEST default_setup 00:04:56.697 ************************************ 00:04:56.697 13:34:58 -- common/autotest_common.sh@1104 -- # default_setup 00:04:56.697 13:34:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.697 13:34:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:56.697 13:34:58 -- setup/hugepages.sh@51 -- # shift 00:04:56.697 13:34:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:56.697 13:34:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:56.697 13:34:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.697 13:34:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.697 13:34:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:56.697 13:34:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.697 13:34:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.697 13:34:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.697 13:34:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.697 13:34:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.697 13:34:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
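At this point the trace shows the size-to-pages arithmetic for default_setup: a 2097152 kB request over 2048 kB pages gives nr_hugepages=1024, and with node_ids=('0') the whole count goes to NUMA node 0, after clear_hp has first echoed 0 into every per-node hugepage counter. A condensed sketch of the same computation and the sysfs writes that apply it, assuming root and a 2048 kB default page size (the loop bodies mirror what the trace does, not the scripts verbatim):

  size_kb=2097152
  hugepagesize_kb=2048
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
  # zero out every per-node, per-size counter first (as clear_hp does above)
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
  done
  # then pin the full count to node 0 only
  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages

The setup.sh run that follows applies this allocation and also rebinds the ioatdma and NVMe devices to vfio-pci, as the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines below show.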
00:04:56.697 13:34:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:56.697 13:34:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:56.697 13:34:58 -- setup/hugepages.sh@73 -- # return 0 00:04:56.697 13:34:58 -- setup/hugepages.sh@137 -- # setup output 00:04:56.697 13:34:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.697 13:34:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.235 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.235 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.494 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.434 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.434 13:35:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:00.434 13:35:02 -- setup/hugepages.sh@89 -- # local node 00:05:00.434 13:35:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.434 13:35:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.434 13:35:02 -- setup/hugepages.sh@92 -- # local surp 00:05:00.434 13:35:02 -- setup/hugepages.sh@93 -- # local resv 00:05:00.434 13:35:02 -- setup/hugepages.sh@94 -- # local anon 00:05:00.434 13:35:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.434 13:35:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.434 13:35:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.434 13:35:02 -- setup/common.sh@18 -- # local node= 00:05:00.434 13:35:02 -- setup/common.sh@19 -- # local var val 00:05:00.434 13:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.434 13:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.434 13:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.434 13:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.434 13:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.434 13:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.434 13:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173856516 kB' 'MemAvailable: 176739856 kB' 'Buffers: 3896 kB' 'Cached: 11767468 kB' 'SwapCached: 0 kB' 'Active: 8788860 kB' 'Inactive: 3507440 kB' 'Active(anon): 8393376 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528020 kB' 'Mapped: 216012 kB' 'Shmem: 7868440 kB' 'KReclaimable: 249732 kB' 'Slab: 837948 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588216 kB' 'KernelStack: 20752 
kB' 'PageTables: 9728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9924624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315032 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:00.434 13:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.434 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.434 13:35:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.434 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.434 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 
13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.435 13:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.435 13:35:02 -- setup/common.sh@33 -- # echo 0 00:05:00.435 13:35:02 -- setup/common.sh@33 -- # return 0 00:05:00.435 13:35:02 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.435 13:35:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.435 13:35:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.435 13:35:02 -- setup/common.sh@18 -- # local node= 00:05:00.435 13:35:02 -- setup/common.sh@19 -- # local var val 00:05:00.435 13:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.435 13:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.435 13:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.435 13:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.435 13:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.435 13:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.435 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.436 13:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173857612 kB' 'MemAvailable: 176740952 kB' 'Buffers: 3896 kB' 'Cached: 11767468 kB' 'SwapCached: 0 kB' 'Active: 8789444 kB' 'Inactive: 3507440 kB' 'Active(anon): 8393960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528952 kB' 'Mapped: 216004 kB' 'Shmem: 7868440 kB' 'KReclaimable: 249732 kB' 'Slab: 837944 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588212 kB' 'KernelStack: 20816 kB' 'PageTables: 9784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9924636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315032 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 
kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:00.436 13:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.436 13:35:02 -- setup/common.sh@32 -- # continue [... identical xtrace records elided: each remaining /proc/meminfo field is checked against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped ...] 00:05:00.436 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.437 13:35:02 -- setup/common.sh@33 -- # echo 0 00:05:00.437 13:35:02 -- setup/common.sh@33 -- # return 0 00:05:00.437 13:35:02 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.437 13:35:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.437 13:35:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.437 13:35:02 -- setup/common.sh@18 -- # local node= 00:05:00.437 13:35:02 -- setup/common.sh@19 -- # local var val 00:05:00.437 13:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.437 13:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.437 13:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.437 13:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.437 13:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.437 13:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.437 13:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173858088 kB' 'MemAvailable: 176741428 kB' 'Buffers: 3896 kB' 'Cached: 11767484 kB' 'SwapCached: 0 kB' 'Active: 8788556 kB' 'Inactive: 3507440 kB' 'Active(anon): 8393072 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528008 kB' 'Mapped: 215936 kB' 'Shmem: 7868456 kB' 'KReclaimable: 249732 kB' 'Slab: 837968 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588236 kB' 'KernelStack: 20656 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9925992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.437 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # 
[[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.437 13:35:02 -- setup/common.sh@32 -- # continue [... identical xtrace records elided: each remaining /proc/meminfo field is checked against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped ...]
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.438 13:35:02 -- setup/common.sh@33 -- # echo 0 00:05:00.438 13:35:02 -- setup/common.sh@33 -- # return 0 00:05:00.438 13:35:02 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.438 13:35:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.438 nr_hugepages=1024 00:05:00.438 13:35:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.438 resv_hugepages=0 00:05:00.438 13:35:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.438 surplus_hugepages=0 00:05:00.438 13:35:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.438 anon_hugepages=0 00:05:00.438 13:35:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.438 13:35:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.438 13:35:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.438 13:35:02 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:00.438 13:35:02 -- setup/common.sh@18 -- # local node= 00:05:00.438 13:35:02 -- setup/common.sh@19 -- # local var val 00:05:00.438 13:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.438 13:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.438 13:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.438 13:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.438 13:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.438 13:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173858012 kB' 'MemAvailable: 176741352 kB' 'Buffers: 3896 kB' 'Cached: 11767496 kB' 'SwapCached: 0 kB' 'Active: 8788312 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392828 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527748 kB' 'Mapped: 215936 kB' 'Shmem: 7868468 kB' 'KReclaimable: 249732 kB' 'Slab: 837968 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588236 kB' 'KernelStack: 20688 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9926156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314968 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.438 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.438 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.439 13:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:00.439 13:35:02 -- setup/common.sh@32 -- # continue [... identical xtrace records elided: each remaining /proc/meminfo field is checked against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped ...]
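The three lookups so far (anon=0, surp=0, resv=0) feed one consistency check, repeated below once HugePages_Total is read back: the kernel's total must equal the requested pool plus surplus and reserved pages. A sketch of that bookkeeping with the values this run traced; variable names are illustrative.

    # Values parsed from /proc/meminfo in the lookups above.
    nr_hugepages=1024   # requested pool (NRHUGE)
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total, echoed in the trace below

    # verify_nr_hugepages-style check: the pool is consistent when
    # total == requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) && echo "pool consistent"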
00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.440 13:35:02 -- setup/common.sh@33 -- # echo 1024 00:05:00.440 13:35:02 -- setup/common.sh@33 -- # return 0 00:05:00.440 13:35:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.440 13:35:02 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.440 13:35:02 -- setup/hugepages.sh@27 -- # local node 00:05:00.440 13:35:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.440 13:35:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.440 13:35:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.440 13:35:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:00.440 13:35:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.440 13:35:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.440 13:35:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.440 13:35:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.440 13:35:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.440 13:35:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.440 13:35:02 -- setup/common.sh@18 -- # local node=0 00:05:00.440 13:35:02 -- setup/common.sh@19 -- # local var val 00:05:00.440 13:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.440 13:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.440 13:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.440 13:35:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.440 13:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.440 13:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85848992 kB' 'MemUsed: 11813692 kB' 'SwapCached: 0 
kB' 'Active: 5268916 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021092 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454616 kB' 'Mapped: 117380 kB' 'AnonPages: 140484 kB' 'Shmem: 4883792 kB' 'KernelStack: 12184 kB' 'PageTables: 4712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104904 kB' 'Slab: 364816 kB' 'SReclaimable: 104904 kB' 'SUnreclaim: 259912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.440 13:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.440 
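For this per-node pass the trace switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripped the "Node 0 " prefix those lines carry (the ${mem[@]#Node +([0-9]) } expansion, which needs extglob). A sketch of that source selection under the same sysfs layout; the helper name is illustrative.

    # Pick the meminfo source for an optional NUMA node argument.
    meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    # Per-node lines read "Node 0 MemTotal: 97662684 kB"; drop the
    # prefix so the same key scan works for both sources.
    sed 's/^Node [0-9]* //' "$(meminfo_file 0)"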
13:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.440 13:35:02 -- setup/common.sh@32 -- # continue [... identical xtrace records elided: each remaining node0 meminfo field is checked against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped ...] 00:05:00.441 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free ==
00:05:00.441 13:35:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.441 13:35:02 -- setup/common.sh@33 -- # echo 0
00:05:00.441 13:35:02 -- setup/common.sh@33 -- # return 0
00:05:00.441 13:35:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.441 13:35:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.441 13:35:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.441 13:35:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.441 13:35:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:00.441 node0=1024 expecting 1024
00:05:00.441 13:35:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:00.441 
00:05:00.441 real 0m3.912s
00:05:00.441 user 0m1.268s
00:05:00.441 sys 0m1.903s
00:05:00.441 13:35:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:00.441 13:35:02 -- common/autotest_common.sh@10 -- # set +x
00:05:00.441 ************************************
00:05:00.441 END TEST default_setup
00:05:00.441 ************************************
00:05:00.441 13:35:02 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:00.441 13:35:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:00.441 13:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:00.441 13:35:02 -- common/autotest_common.sh@10 -- # set +x
00:05:00.441 ************************************
00:05:00.441 START TEST per_node_1G_alloc
00:05:00.441 ************************************
00:05:00.441 13:35:02 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:00.441 13:35:02 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:00.441 13:35:02 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:00.441 13:35:02 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:00.441 13:35:02 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:05:00.441 13:35:02 -- setup/hugepages.sh@51 -- # shift
00:05:00.441 13:35:02 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:00.441 13:35:02 -- setup/hugepages.sh@52 -- # local node_ids
00:05:00.441 13:35:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:00.441 13:35:02 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:00.441 13:35:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:00.441 13:35:02 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:00.441 13:35:02 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:00.441 13:35:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:00.441 13:35:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:00.441 13:35:02 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:00.441 13:35:02 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:00.441 13:35:02 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:05:00.441 13:35:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:00.441 13:35:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:00.441 13:35:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:00.441 13:35:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:00.441 13:35:02 -- setup/hugepages.sh@73 -- # return 0
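
The get_meminfo loop traced above is plain bash: common.sh splits each meminfo line on ': ', and the backslash-escaped right-hand side of the [[ ... == \H\u\g\e... ]] test forces a literal (non-glob) comparison of the key name. A minimal standalone sketch of that pattern (function and variable names here are illustrative, not the script's own):

    # Sketch: return the value of one /proc/meminfo field, the way the
    # traced get_meminfo does. IFS=': ' splits "HugePages_Surp:   0" into
    # key and value; quoting "$want" makes the match literal, which is what
    # the escaped \H\u\g\e... pattern in the xtrace output achieves.
    meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    meminfo_value HugePages_Surp    # prints 0 on this test node
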
00:05:00.441 13:35:02 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:00.441 13:35:02 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:00.441 13:35:02 -- setup/hugepages.sh@146 -- # setup output
00:05:00.441 13:35:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.441 13:35:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:03.790 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:03.790 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.790 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.791 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.791 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.791 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.791 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.791 13:35:05 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:03.791 13:35:05 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:03.791 13:35:05 -- setup/hugepages.sh@89 -- # local node
00:05:03.791 13:35:05 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.791 13:35:05 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.791 13:35:05 -- setup/hugepages.sh@92 -- # local surp
00:05:03.791 13:35:05 -- setup/hugepages.sh@93 -- # local resv
00:05:03.791 13:35:05 -- setup/hugepages.sh@94 -- # local anon
00:05:03.791 13:35:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.791 13:35:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.791 13:35:05 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.791 13:35:05 -- setup/common.sh@18 -- # local node=
00:05:03.791 13:35:05 -- setup/common.sh@19 -- # local var val
00:05:03.791 13:35:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.791 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.791 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.791 13:35:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.791 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.791 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.791 13:35:05 -- setup/common.sh@31 -- # IFS=': '
00:05:03.791 13:35:05 -- setup/common.sh@31 -- # read -r var val _
00:05:03.791 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173864540 kB' 'MemAvailable: 176747880 kB' 'Buffers: 3896 kB' 'Cached: 11767576 kB' 'SwapCached: 0 kB' 'Active: 8788496 kB' 'Inactive: 3507440 kB' 'Active(anon): 8393012 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527748 kB' 'Mapped: 215952 kB' 'Shmem: 7868548 kB' 'KReclaimable: 249732 kB' 'Slab: 838284 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588552 kB' 'KernelStack: 20560 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9922112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315016 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
13:35:05 [ repetitive xtrace elided: setup/common.sh@31-32 compares each meminfo key (MemTotal through HardwareCorrupted) against AnonHugePages; none match, so every iteration hits "continue" ]
00:05:03.792 13:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.792 13:35:05 -- setup/common.sh@33 -- # echo 0
00:05:03.792 13:35:05 -- setup/common.sh@33 -- # return 0
00:05:03.792 13:35:05 -- setup/hugepages.sh@97 -- # anon=0
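
The hugepages.sh@96 test above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) is the expanded contents of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed token marks the active THP mode; the anon-hugepage count is only taken when that mode is not [never]. A hedged sketch of the same gate (variable names are illustrative):

    # Sketch: only sample AnonHugePages when transparent hugepages are not
    # fully disabled, mirroring the hugepages.sh@96 check in the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"    # the trace above lands on anon=0
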
00:05:03.792 13:35:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.792 13:35:05 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.792 13:35:05 -- setup/common.sh@18 -- # local node=
00:05:03.792 13:35:05 -- setup/common.sh@19 -- # local var val
00:05:03.792 13:35:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.792 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.792 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.792 13:35:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.792 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.792 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.792 13:35:05 -- setup/common.sh@31 -- # IFS=': '
00:05:03.792 13:35:05 -- setup/common.sh@31 -- # read -r var val _
00:05:03.792 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173864152 kB' 'MemAvailable: 176747492 kB' 'Buffers: 3896 kB' 'Cached: 11767580 kB' 'SwapCached: 0 kB' 'Active: 8788052 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392568 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527324 kB' 'Mapped: 215948 kB' 'Shmem: 7868552 kB' 'KReclaimable: 249732 kB' 'Slab: 838400 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588668 kB' 'KernelStack: 20528 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9922124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
13:35:05 [ repetitive xtrace elided: setup/common.sh@31-32 compares each meminfo key (MemTotal through HugePages_Rsvd) against HugePages_Surp; none match, so every iteration hits "continue" ]
00:05:03.794 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.794 13:35:05 -- setup/common.sh@33 -- # echo 0
00:05:03.794 13:35:05 -- setup/common.sh@33 -- # return 0
00:05:03.794 13:35:05 -- setup/hugepages.sh@99 -- # surp=0
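
Each get_meminfo call above probes /sys/devices/system/node/node/meminfo before settling on /proc/meminfo: with node= empty that sysfs path cannot exist, so the system-wide file wins, while a per-node query would read the NUMA node's own counters, whose lines carry a "Node <N> " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that source selection, using the standard kernel paths (variable names are illustrative):

    # Sketch: pick the meminfo source the way the traced common.sh does.
    node=""    # set to a NUMA node id (e.g. 0) for a per-node query
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    # Per-node lines look like "Node 0 HugePages_Total:  512"; drop the prefix.
    sed 's/^Node [0-9]* //' "$mem_f" | grep '^HugePages_'
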
00:05:03.794 13:35:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.794 13:35:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.794 13:35:05 -- setup/common.sh@18 -- # local node=
00:05:03.794 13:35:05 -- setup/common.sh@19 -- # local var val
00:05:03.794 13:35:05 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.794 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.794 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.794 13:35:05 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.794 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.794 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.794 13:35:05 -- setup/common.sh@31 -- # IFS=': '
00:05:03.794 13:35:05 -- setup/common.sh@31 -- # read -r var val _
00:05:03.794 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173864656 kB' 'MemAvailable: 176747996 kB' 'Buffers: 3896 kB' 'Cached: 11767580 kB' 'SwapCached: 0 kB' 'Active: 8788052 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392568 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527324 kB' 'Mapped: 215948 kB' 'Shmem: 7868552 kB' 'KReclaimable: 249732 kB' 'Slab: 838400 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588668 kB' 'KernelStack: 20528 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9922136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
13:35:05 [ repetitive xtrace elided: setup/common.sh@31-32 compares each meminfo key (MemTotal through HugePages_Free) against HugePages_Rsvd; none match, so every iteration hits "continue" ]
00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.796 13:35:05 -- setup/common.sh@33 -- # echo 0
00:05:03.796 13:35:05 -- setup/common.sh@33 -- # return 0
00:05:03.796 13:35:05 -- setup/hugepages.sh@100 -- # resv=0
00:05:03.796 13:35:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:03.796 nr_hugepages=1024
00:05:03.796 13:35:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.796 resv_hugepages=0
00:05:03.796 13:35:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.796 surplus_hugepages=0
00:05:03.796 13:35:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.796 anon_hugepages=0
00:05:03.796 13:35:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.796 13:35:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
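
The two arithmetic checks just above are the heart of verify_nr_hugepages: the kernel's HugePages_Total (1024 here) must equal the requested pool plus surplus and reserved pages, which with surp=0 and resv=0 collapses to 1024 == 1024. The same identity restated as a standalone check (the expected count is this run's value; variable names are illustrative):

    # Sketch: re-derive the verify_nr_hugepages identity from /proc/meminfo.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == expected + surp + resv )); then
        echo "hugepage pool consistent: total=$total (surp=$surp resv=$resv)"
    else
        echo "hugepage pool mismatch: total=$total expected=$expected" >&2
    fi
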
HugePages_Total 00:05:03.796 13:35:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.796 13:35:05 -- setup/common.sh@18 -- # local node= 00:05:03.796 13:35:05 -- setup/common.sh@19 -- # local var val 00:05:03.796 13:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.796 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.796 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.796 13:35:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.796 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.796 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173864656 kB' 'MemAvailable: 176747996 kB' 'Buffers: 3896 kB' 'Cached: 11767580 kB' 'SwapCached: 0 kB' 'Active: 8788052 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392568 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527324 kB' 'Mapped: 215948 kB' 'Shmem: 7868552 kB' 'KReclaimable: 249732 kB' 'Slab: 838400 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588668 kB' 'KernelStack: 20528 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9922152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315000 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.796 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.796 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.796 13:35:05 -- 
[xtrace condensed: setup/common.sh@32 tests each remaining meminfo key (SwapCached through ShmemPmdMapped) against HugePages_Total and skips every non-match with 'continue']
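The scan condensed above is the heart of setup/common.sh's get_meminfo helper: it mapfiles the relevant meminfo file into an array, strips the 'Node N ' prefix that per-node files carry, then walks the 'Key: value' pairs with IFS=': ' until the requested key matches, at which point the value is echoed. A minimal sketch of that technique, reconstructed from the trace (names and structure follow the trace, but this is an illustration, not SPDK's exact code):

    shopt -s extglob   # the +([0-9]) pattern below needs extended globs
    get_meminfo_sketch() {
        # $1 = key to look up (e.g. HugePages_Total), $2 = optional NUMA node number
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. get_meminfo_sketch HugePages_Total -> 1024 in this run
                return 0
            fi
        done
        return 1
    }

In the trace the scan resumes below and finally matches HugePages_Total, echoing 1024, which hugepages.sh@110 then checks against nr_hugepages + surp + resv.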
# IFS=': ' 00:05:03.797 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.797 13:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.798 13:35:05 -- setup/common.sh@33 -- # echo 1024 00:05:03.798 13:35:05 -- setup/common.sh@33 -- # return 0 00:05:03.798 13:35:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.798 13:35:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.798 13:35:05 -- setup/hugepages.sh@27 -- # local node 00:05:03.798 13:35:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.798 13:35:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.798 13:35:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.798 13:35:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:03.798 13:35:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.798 13:35:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.798 13:35:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.798 13:35:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.798 13:35:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.798 13:35:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.798 13:35:05 -- setup/common.sh@18 -- # local node=0 00:05:03.798 13:35:05 -- setup/common.sh@19 -- # local var val 00:05:03.798 13:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.798 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.798 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.798 13:35:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.798 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.798 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 97662684 kB' 'MemFree: 86918392 kB' 'MemUsed: 10744292 kB' 'SwapCached: 0 kB' 'Active: 5269084 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021260 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454648 kB' 'Mapped: 117384 kB' 'AnonPages: 140668 kB' 'Shmem: 4883824 kB' 'KernelStack: 11960 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104904 kB' 'Slab: 365480 kB' 'SReclaimable: 104904 kB' 'SUnreclaim: 260576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.798 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.798 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 
[xtrace condensed: the node0 scan tests Unevictable through HugePages_Total against HugePages_Surp, skipping each non-match with 'continue']
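Having confirmed the global total of 1024 pages, hugepages.sh@115-117 walks each node, adding the reserved count and that node's HugePages_Surp to the expected figure before comparing. A short sketch of that accounting, continuing the helper above (the nodes_test values come straight from the trace; resv is 0 in this run):

    nodes_test=([0]=512 [1]=512)   # pages the test asked for per node
    resv=0                         # HugePages_Rsvd reported by the global query
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += surp ))   # the trace shows '+= 0' on both nodes
    done

Below, the node0 scan reaches HugePages_Surp and echoes 0, so node0's expected count stays at 512.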
00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@33 -- # echo 0 00:05:03.799 13:35:05 -- setup/common.sh@33 -- # return 0 00:05:03.799 13:35:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.799 13:35:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.799 13:35:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.799 13:35:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:03.799 13:35:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.799 13:35:05 -- setup/common.sh@18 -- # local node=1 00:05:03.799 13:35:05 -- setup/common.sh@19 -- # local var val 00:05:03.799 13:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.799 13:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.799 13:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:03.799 13:35:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:03.799 13:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.799 13:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.799 13:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 86946940 kB' 'MemUsed: 6771552 kB' 'SwapCached: 0 kB' 'Active: 3517868 kB' 'Inactive: 184440 kB' 'Active(anon): 3370208 kB' 'Inactive(anon): 0 kB' 'Active(file): 147660 kB' 'Inactive(file): 184440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3316884 kB' 'Mapped: 98328 kB' 'AnonPages: 385492 kB' 'Shmem: 2984784 kB' 'KernelStack: 8520 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144828 kB' 'Slab: 472912 kB' 'SReclaimable: 144828 kB' 'SUnreclaim: 328084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.799 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.799 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 
[xtrace condensed: the node1 scan tests Active through AnonHugePages against HugePages_Surp, skipping each non-match with 'continue']
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # continue 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.800 13:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.800 13:35:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.800 13:35:05 -- setup/common.sh@33 -- # echo 0 00:05:03.800 13:35:05 -- setup/common.sh@33 -- # return 0 00:05:03.800 13:35:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.800 13:35:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.800 13:35:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.800 13:35:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.800 13:35:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.800 node0=512 expecting 512 00:05:03.800 13:35:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.800 13:35:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.800 13:35:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.800 13:35:05 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:03.800 node1=512 expecting 512 00:05:03.800 13:35:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.800 00:05:03.800 real 0m3.000s 00:05:03.800 user 0m1.255s 00:05:03.800 sys 0m1.813s 00:05:03.800 13:35:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.800 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:05:03.800 ************************************ 00:05:03.800 END TEST per_node_1G_alloc 00:05:03.801 ************************************ 00:05:03.801 13:35:05 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:03.801 
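With both per-node queries done, the test closes by collapsing the per-node counts into associative-array keys (hugepages.sh@127's sorted_t and sorted_s): identical counts collapse to a single key, so an uneven split would leave more than one. It then prints 'node0=512 expecting 512' and 'node1=512 expecting 512' and asserts [[ 512 == 512 ]]. A tiny illustration of that set trick, continuing the sketch above:

    declare -A sorted_t=()
    for count in "${nodes_test[@]}"; do
        sorted_t[$count]=1            # keys act as a set of distinct counts
    done
    (( ${#sorted_t[@]} == 1 )) && echo 'every node holds the same hugepage count'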
13:35:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.801 13:35:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.801 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:05:03.801 ************************************ 00:05:03.801 START TEST even_2G_alloc 00:05:03.801 ************************************ 00:05:03.801 13:35:05 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:03.801 13:35:05 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:03.801 13:35:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.801 13:35:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.801 13:35:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.801 13:35:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.801 13:35:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.801 13:35:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.801 13:35:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:03.801 13:35:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.801 13:35:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.801 13:35:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.801 13:35:05 -- setup/hugepages.sh@83 -- # : 512 00:05:03.801 13:35:05 -- setup/hugepages.sh@84 -- # : 1 00:05:03.801 13:35:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:03.801 13:35:05 -- setup/hugepages.sh@83 -- # : 0 00:05:03.801 13:35:05 -- setup/hugepages.sh@84 -- # : 0 00:05:03.801 13:35:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.801 13:35:05 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:03.801 13:35:05 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:03.801 13:35:05 -- setup/hugepages.sh@153 -- # setup output 00:05:03.801 13:35:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.801 13:35:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.340 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:06.340 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:06.340 0000:80:04.1 (8086 2021): 
Already using the vfio-pci driver 00:05:06.340 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:06.340 13:35:08 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:06.340 13:35:08 -- setup/hugepages.sh@89 -- # local node 00:05:06.340 13:35:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.340 13:35:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.340 13:35:08 -- setup/hugepages.sh@92 -- # local surp 00:05:06.340 13:35:08 -- setup/hugepages.sh@93 -- # local resv 00:05:06.340 13:35:08 -- setup/hugepages.sh@94 -- # local anon 00:05:06.340 13:35:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.340 13:35:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.340 13:35:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.340 13:35:08 -- setup/common.sh@18 -- # local node= 00:05:06.340 13:35:08 -- setup/common.sh@19 -- # local var val 00:05:06.340 13:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.340 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.340 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.341 13:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.341 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.341 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173880884 kB' 'MemAvailable: 176764224 kB' 'Buffers: 3896 kB' 'Cached: 11767696 kB' 'SwapCached: 0 kB' 'Active: 8786556 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391072 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525140 kB' 'Mapped: 215056 kB' 'Shmem: 7868668 kB' 'KReclaimable: 249732 kB' 'Slab: 838076 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588344 kB' 'KernelStack: 20496 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9910992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315080 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 
[xtrace condensed: the scan tests Buffers through KernelStack against AnonHugePages, skipping each non-match with 'continue']
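Before counting, verify_nr_hugepages checks whether transparent hugepages are enabled at all: hugepages.sh@96 pattern-matches the policy line (here 'always [madvise] never', i.e. madvise mode) against *[never]*, and only then queries AnonHugePages, which comes back 0 on this box. A sketch of that guard (the sysfs path is the standard kernel location; the fallback branch is illustrative):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # kB of THP-backed anonymous memory
    else
        anon=0   # assumed fallback when THP is disabled
    fi

Below, the scan matches AnonHugePages, echoes 0, and the test carries on with anon=0.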
13:35:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.341 13:35:08 -- setup/common.sh@33 -- # echo 0 00:05:06.341 13:35:08 -- setup/common.sh@33 -- # 
return 0 00:05:06.341 13:35:08 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.341 13:35:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.341 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.341 13:35:08 -- setup/common.sh@18 -- # local node= 00:05:06.341 13:35:08 -- setup/common.sh@19 -- # local var val 00:05:06.341 13:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.341 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.341 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.341 13:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.341 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.341 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.341 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.341 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173883112 kB' 'MemAvailable: 176766452 kB' 'Buffers: 3896 kB' 'Cached: 11767696 kB' 'SwapCached: 0 kB' 'Active: 8786416 kB' 'Inactive: 3507440 kB' 'Active(anon): 8390932 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525488 kB' 'Mapped: 214884 kB' 'Shmem: 7868668 kB' 'KReclaimable: 249732 kB' 'Slab: 838112 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588380 kB' 'KernelStack: 20480 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9911004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.341 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- 
[xtrace condensed: the scan tests SwapCached through SecPageTables against HugePages_Surp; the log breaks off mid-scan at this point]
read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 
-- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.342 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.342 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.342 13:35:08 -- setup/common.sh@33 -- # echo 0 00:05:06.342 13:35:08 -- setup/common.sh@33 -- # return 0 00:05:06.342 13:35:08 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.342 13:35:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.343 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.343 13:35:08 -- setup/common.sh@18 -- # local node= 00:05:06.343 13:35:08 -- setup/common.sh@19 -- # local var val 00:05:06.343 13:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.343 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.343 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.343 13:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.343 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.343 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.343 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.343 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.343 13:35:08 -- 
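Note for readers following the trace: the block above is the get_meminfo pattern from setup/common.sh, as the xtrace shows it: snapshot the relevant meminfo file with mapfile, strip any "Node N " prefix, then scan field by field until the requested key matches. Below is a minimal standalone bash sketch of that same pattern; the helper name get_meminfo_sketch is illustrative, not the project's actual function.

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern used below

# get_meminfo_sketch is an illustrative name, not setup/common.sh itself.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries switch to the sysfs copy when it exists, mirroring
    # the [[ -e /sys/devices/system/node/node$node/meminfo ]] test above.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that so the
    # field names match the /proc/meminfo spelling.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # The long trace above is this loop: skip every field that is
        # not the one requested.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp      # system-wide: prints 0 in this run
get_meminfo_sketch HugePages_Total 0   # node 0: prints 512 in this run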
00:05:06.343 13:35:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:06.343 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:06.343 13:35:08 -- setup/common.sh@18 -- # local node=
00:05:06.343 13:35:08 -- setup/common.sh@19 -- # local var val
00:05:06.343 13:35:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.343 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.343 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.343 13:35:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.343 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.343 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.343 13:35:08 -- setup/common.sh@31 -- # IFS=': '
00:05:06.343 13:35:08 -- setup/common.sh@31 -- # read -r var val _
00:05:06.343 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173883432 kB' 'MemAvailable: 176766772 kB' 'Buffers: 3896 kB' 'Cached: 11767708 kB' 'SwapCached: 0 kB' 'Active: 8786412 kB' 'Inactive: 3507440 kB' 'Active(anon): 8390928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525488 kB' 'Mapped: 214884 kB' 'Shmem: 7868680 kB' 'KReclaimable: 249732 kB' 'Slab: 838112 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588380 kB' 'KernelStack: 20480 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9911016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
00:05:06.343 [xtrace trimmed: field-by-field comparison against HugePages_Rsvd, "continue" for every non-matching field]
00:05:06.344 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:06.344 13:35:08 -- setup/common.sh@33 -- # echo 0
00:05:06.344 13:35:08 -- setup/common.sh@33 -- # return 0
00:05:06.344 13:35:08 -- setup/hugepages.sh@100 -- # resv=0
00:05:06.344 13:35:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:06.344 nr_hugepages=1024
00:05:06.344 13:35:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:06.344 resv_hugepages=0
00:05:06.344 13:35:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:06.344 surplus_hugepages=0
00:05:06.344 13:35:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:06.344 anon_hugepages=0
00:05:06.344 13:35:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:06.344 13:35:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
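With anon, surp and resv collected, the test cross-checks the kernel's hugepage accounting: the pages the kernel reports must equal the requested count plus surplus and reserved pages. A sketch of that identity follows, reusing the illustrative get_meminfo_sketch helper from the earlier block; the wiring here is an assumption for illustration, not hugepages.sh itself.

# Uses the illustrative get_meminfo_sketch helper defined above.
nr_hugepages=1024                              # requested page count
surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run

# The identity the test asserts: reported pages == requested + surplus + reserved.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi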
00:05:06.344 13:35:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:06.344 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:06.344 13:35:08 -- setup/common.sh@18 -- # local node=
00:05:06.344 13:35:08 -- setup/common.sh@19 -- # local var val
00:05:06.344 13:35:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.344 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.344 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:06.344 13:35:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:06.344 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.344 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.344 13:35:08 -- setup/common.sh@31 -- # IFS=': '
00:05:06.344 13:35:08 -- setup/common.sh@31 -- # read -r var val _
00:05:06.344 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173883684 kB' 'MemAvailable: 176767024 kB' 'Buffers: 3896 kB' 'Cached: 11767724 kB' 'SwapCached: 0 kB' 'Active: 8786444 kB' 'Inactive: 3507440 kB' 'Active(anon): 8390960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525488 kB' 'Mapped: 214884 kB' 'Shmem: 7868696 kB' 'KReclaimable: 249732 kB' 'Slab: 838112 kB' 'SReclaimable: 249732 kB' 'SUnreclaim: 588380 kB' 'KernelStack: 20480 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9911032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
00:05:06.344 [xtrace trimmed: field-by-field comparison against HugePages_Total, "continue" for every non-matching field]
00:05:06.345 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:06.345 13:35:08 -- setup/common.sh@33 -- # echo 1024
00:05:06.345 13:35:08 -- setup/common.sh@33 -- # return 0
00:05:06.345 13:35:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
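The system-wide check passes, and the trace next runs get_nodes, which enumerates /sys/devices/system/node/node* and records a per-node page count (512 on each of this machine's two nodes). A sketch of that enumeration follows; reading each node's 2 MiB nr_hugepages counter is an assumption about where the 512 comes from (the sysfs path is a standard kernel interface, but the real script may populate the array differently), and the array name nodes_sys mirrors the trace.

#!/usr/bin/env bash
shopt -s extglob nullglob

# Illustrative re-creation of the get_nodes step: one array slot per
# NUMA node directory, indexed by the trailing node number.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # Assumed source of the per-node count seen in the trace: the node's
    # 2 MiB hugepage counter in sysfs.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes nodes=${nodes_sys[*]}"   # no_nodes=2 nodes=512 512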
00:05:06.345 13:35:08 -- setup/hugepages.sh@112 -- # get_nodes
00:05:06.345 13:35:08 -- setup/hugepages.sh@27 -- # local node
00:05:06.345 13:35:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.345 13:35:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:06.345 13:35:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:06.345 13:35:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:06.345 13:35:08 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:06.345 13:35:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:06.345 13:35:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.345 13:35:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.345 13:35:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:06.345 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.345 13:35:08 -- setup/common.sh@18 -- # local node=0
00:05:06.345 13:35:08 -- setup/common.sh@19 -- # local var val
00:05:06.345 13:35:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.345 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.345 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:06.345 13:35:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:06.345 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.345 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.345 13:35:08 -- setup/common.sh@31 -- # IFS=': '
00:05:06.345 13:35:08 -- setup/common.sh@31 -- # read -r var val _
00:05:06.345 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86922960 kB' 'MemUsed: 10739724 kB' 'SwapCached: 0 kB' 'Active: 5268028 kB' 'Inactive: 3323000 kB' 'Active(anon): 5020204 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454684 kB' 'Mapped: 116552 kB' 'AnonPages: 139464 kB' 'Shmem: 4883860 kB' 'KernelStack: 11928 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104904 kB' 'Slab: 365340 kB' 'SReclaimable: 104904 kB' 'SUnreclaim: 260436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:06.345 [xtrace trimmed: field-by-field comparison against HugePages_Surp over node0's meminfo]
00:05:06.346 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:06.346 13:35:08 -- setup/common.sh@33 -- # echo 0
00:05:06.346 13:35:08 -- setup/common.sh@33 -- # return 0
00:05:06.346 13:35:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
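Node 0 reports zero surplus pages, and the loop moves on to node 1 via its own meminfo copy. The per-node pass sketched below mirrors the @115-@117 trace lines: fold the global reserved count into each node's expected value, then add that node's own surplus reading. The nodes_test seeding and the get_meminfo_sketch helper are assumptions for this sketch, not the project's code.

# Illustrative per-node pass; uses get_meminfo_sketch from the earlier block.
resv=0
declare -a nodes_test=([0]=512 [1]=512)   # assumed seeding, matching this machine
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node query: the helper switches to /sys/devices/system/node/nodeN/meminfo.
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += surp ))
    echo "node$node: expected=${nodes_test[node]} surplus=$surp"
done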
00:05:06.346 13:35:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:06.346 13:35:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:06.346 13:35:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:06.346 13:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:06.346 13:35:08 -- setup/common.sh@18 -- # local node=1
00:05:06.346 13:35:08 -- setup/common.sh@19 -- # local var val
00:05:06.346 13:35:08 -- setup/common.sh@20 -- # local mem_f mem
00:05:06.346 13:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:06.346 13:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:06.346 13:35:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:06.346 13:35:08 -- setup/common.sh@28 -- # mapfile -t mem
00:05:06.346 13:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:06.346 13:35:08 -- setup/common.sh@31 -- # IFS=': '
00:05:06.346 13:35:08 -- setup/common.sh@31 -- # read -r var val _
00:05:06.346 13:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 86961524 kB' 'MemUsed: 6756968 kB' 'SwapCached: 0 kB' 'Active: 3518428 kB' 'Inactive: 184440 kB' 'Active(anon): 3370768 kB' 'Inactive(anon): 0 kB' 'Active(file): 147660 kB' 'Inactive(file): 184440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3316964 kB' 'Mapped: 98332 kB' 'AnonPages: 386012 kB' 'Shmem: 2984864 kB' 'KernelStack: 8552 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144828 kB' 'Slab: 472772 kB' 'SReclaimable: 144828 kB' 'SUnreclaim: 327944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:06.346 [xtrace trimmed: field-by-field comparison against HugePages_Surp over node1's meminfo continues]
00:05:06.347 13:35:08 --
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.347 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.347 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.347 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.347 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.347 13:35:08 -- setup/common.sh@32 -- # continue 00:05:06.347 13:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.347 13:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.347 13:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.347 13:35:08 -- setup/common.sh@33 -- # echo 0 00:05:06.347 13:35:08 -- setup/common.sh@33 -- # return 0 00:05:06.347 13:35:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.347 13:35:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.347 13:35:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.347 13:35:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.347 13:35:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:06.347 node0=512 expecting 512 00:05:06.347 13:35:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.347 13:35:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.347 13:35:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.347 13:35:08 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:06.347 node1=512 expecting 512 00:05:06.347 13:35:08 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:06.347 00:05:06.347 real 0m2.730s 00:05:06.347 user 0m1.041s 00:05:06.347 sys 0m1.718s 00:05:06.347 13:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.347 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:06.347 ************************************ 00:05:06.347 END TEST even_2G_alloc 00:05:06.347 ************************************ 00:05:06.347 13:35:08 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:06.347 13:35:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.347 13:35:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.347 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:06.347 ************************************ 00:05:06.347 START TEST odd_alloc 00:05:06.347 ************************************ 00:05:06.347 13:35:08 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:06.347 13:35:08 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:06.347 13:35:08 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:06.347 13:35:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:06.347 13:35:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:06.347 13:35:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:06.347 13:35:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:06.347 13:35:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:06.347 13:35:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:06.347 13:35:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:06.347 13:35:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:06.347 13:35:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:06.347 13:35:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:06.347 13:35:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:06.347 13:35:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:06.347 
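The odd_alloc trace above asks get_test_nr_hugepages for 2098176 kB, which is 1024.5 two-megabyte pages and is evidently rounded up to the odd count nr_hugepages=1025; the per-node lines that follow then show that count spread unevenly over the two NUMA nodes (node 1 gets 512, node 0 the odd 513). A minimal standalone sketch of such a divide-and-carry split is below. It reproduces the 513/512 result, but it is a reconstruction, not the actual hugepages.sh logic; the loop shape and names are assumptions.

    #!/usr/bin/env bash
    # Hypothetical reconstruction: spread an odd hugepage count over NUMA nodes
    # so shares stay as even as possible and node 0 absorbs the remainder.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test

    remaining=$nr_hugepages
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        share=$(( remaining / (node + 1) ))   # even share over the nodes left
        nodes_test[node]=$share
        remaining=$(( remaining - share ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]}"   # prints node0=513, node1=512
    done

Walking the nodes from the highest index down means the leftover half-page's worth of count naturally lands on node 0, matching the 513/512 assignment traced next.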
00:05:06.347 13:35:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.347 13:35:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:06.347 13:35:08 -- setup/hugepages.sh@83 -- # : 513
00:05:06.347 13:35:08 -- setup/hugepages.sh@84 -- # : 1
00:05:06.347 13:35:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.347 13:35:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:05:06.347 13:35:08 -- setup/hugepages.sh@83 -- # : 0
00:05:06.347 13:35:08 -- setup/hugepages.sh@84 -- # : 0
00:05:06.347 13:35:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:06.347 13:35:08 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:06.347 13:35:08 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:06.347 13:35:08 -- setup/hugepages.sh@160 -- # setup output
00:05:06.347 13:35:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:06.347 13:35:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:08.886 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:08.886 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:08.886 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:09.148 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:09.148 13:35:11 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:09.148 13:35:11 -- setup/hugepages.sh@89 -- # local node
00:05:09.148 13:35:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:09.148 13:35:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:09.148 13:35:11 -- setup/hugepages.sh@92 -- # local surp
00:05:09.148 13:35:11 -- setup/hugepages.sh@93 -- # local resv
00:05:09.148 13:35:11 -- setup/hugepages.sh@94 -- # local anon
00:05:09.148 13:35:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:09.148 13:35:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:09.148 13:35:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:09.148 13:35:11 -- setup/common.sh@18 -- # local node=
00:05:09.148 13:35:11 -- setup/common.sh@19 -- # local var val
00:05:09.148 13:35:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.148 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.148 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.148 13:35:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.148 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.148 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.148 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.148 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.148 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173885136 kB' 'MemAvailable: 176768464 kB' 'Buffers: 3896 kB' 'Cached: 11767816 kB' 'SwapCached: 0 kB' 'Active: 8786816 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391332 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525800 kB' 'Mapped: 214892 kB' 'Shmem: 7868788 kB' 'KReclaimable: 249708 kB' 'Slab: 838124 kB' 'SReclaimable: 249708 kB' 'SUnreclaim: 588416 kB' 'KernelStack: 20512 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 9911652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315080 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace elided: each key above is compared against AnonHugePages and skipped with 'continue' until AnonHugePages itself is reached]
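The get_meminfo call traced here is the core parsing trick of setup/common.sh: mapfile reads the whole meminfo file into an array, the extglob expansion "${mem[@]#Node +([0-9]) }" strips the 'Node N ' prefix that per-node meminfo files under /sys carry, and IFS=': ' read -r var val _ splits each 'Key: value kB' line into a key and a number, scanned until the requested key matches. A self-contained sketch along those lines follows; it is a reconstruction of the idea, not the verbatim helper, and the node-fallback logic is an assumption.

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Hypothetical re-creation of a get_meminfo-style helper: print the value of
    # one key from /proc/meminfo, or from a node's meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node 'Node 0 ' prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total    # 1025 on the machine traced above
    get_meminfo HugePages_Free 0   # same key, read from node 0's meminfo

The linear scan is why the trace repeats the same IFS/read/compare triple for every key: bash has no map of meminfo, so each lookup walks the file from the top.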
00:05:09.149 13:35:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:09.149 13:35:11 -- setup/common.sh@33 -- # echo 0
00:05:09.149 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.149 13:35:11 -- setup/hugepages.sh@97 -- # anon=0
00:05:09.149 13:35:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:09.149 13:35:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.149 13:35:11 -- setup/common.sh@18 -- # local node=
00:05:09.149 13:35:11 -- setup/common.sh@19 -- # local var val
00:05:09.149 13:35:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.149 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.149 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.149 13:35:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.149 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.149 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.149 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.149 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.149 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886632 kB' 'MemAvailable: 176769928 kB' 'Buffers: 3896 kB' 'Cached: 11767820 kB' 'SwapCached: 0 kB' 'Active: 8786976 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525560 kB' 'Mapped: 214968 kB' 'Shmem: 7868792 kB' 'KReclaimable: 249644 kB' 'Slab: 838076 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588432 kB' 'KernelStack: 20496 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 9911664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315032 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace elided: each key above is compared against HugePages_Surp and skipped with 'continue']
00:05:09.150 13:35:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.150 13:35:11 -- setup/common.sh@33 -- # echo 0
00:05:09.150 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.150 13:35:11 -- setup/hugepages.sh@99 -- # surp=0
00:05:09.150 13:35:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:09.150 13:35:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:09.150 13:35:11 -- setup/common.sh@18 -- # local node=
00:05:09.150 13:35:11 -- setup/common.sh@19 -- # local var val
00:05:09.150 13:35:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.150 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.150 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:09.150 13:35:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:09.151 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.151 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.151 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886716 kB' 'MemAvailable: 176770012 kB' 'Buffers: 3896 kB' 'Cached: 11767828 kB' 'SwapCached: 0 kB' 'Active: 8786492 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391008 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525540 kB' 'Mapped: 214892 kB' 'Shmem: 7868800 kB' 'KReclaimable: 249644 kB' 'Slab: 838068 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588424 kB' 'KernelStack: 20496 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 9911680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace elided: each key above is compared against HugePages_Rsvd and skipped with 'continue']
00:05:09.152 13:35:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:09.152 13:35:11 -- setup/common.sh@33 -- # echo 0
00:05:09.152 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.152 13:35:11 -- setup/hugepages.sh@100 -- # resv=0
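Every '13:35:11 -- setup/common.sh@33 -- # echo 0' style prefix in this transcript is bash xtrace (set -x) output under a customized PS4 that expands to a timestamp plus the source file and line of the traced command; the leading 00:05:09.152 column is Jenkins' own timestamper, added outside the shell. The exact PS4 string the SPDK harness uses is not visible in this excerpt, so the snippet below is only an illustrative guess at one that produces a similar-looking prefix.

    #!/usr/bin/env bash
    # Assumed PS4 shape, for illustration only: time -- file@line -- # command.
    # The real trace shows relative paths like setup/common.sh, so the actual
    # PS4 presumably trims a repository prefix from BASH_SOURCE.
    export PS4='$(date +%T) -- ${BASH_SOURCE[0]}@${LINENO} -- # '
    set -x
    nr_hugepages=1025   # traced roughly as: 13:35:11 -- ./demo.sh@7 -- # nr_hugepages=1025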
-- # echo nr_hugepages=1025 00:05:09.152 nr_hugepages=1025 00:05:09.152 13:35:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.152 resv_hugepages=0 00:05:09.152 13:35:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.152 surplus_hugepages=0 00:05:09.152 13:35:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.152 anon_hugepages=0 00:05:09.152 13:35:11 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:09.152 13:35:11 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:09.152 13:35:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.152 13:35:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.152 13:35:11 -- setup/common.sh@18 -- # local node= 00:05:09.152 13:35:11 -- setup/common.sh@19 -- # local var val 00:05:09.152 13:35:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:09.152 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.152 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.152 13:35:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.152 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.152 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.152 13:35:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.152 13:35:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.152 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886716 kB' 'MemAvailable: 176770012 kB' 'Buffers: 3896 kB' 'Cached: 11767840 kB' 'SwapCached: 0 kB' 'Active: 8786432 kB' 'Inactive: 3507440 kB' 'Active(anon): 8390948 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525444 kB' 'Mapped: 214892 kB' 'Shmem: 7868812 kB' 'KReclaimable: 249644 kB' 'Slab: 838068 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588424 kB' 'KernelStack: 20480 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 9911692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315048 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # continue 00:05:09.414 13:35:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.414 13:35:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # continue 00:05:09.414 13:35:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.414 13:35:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.414 13:35:11 -- setup/common.sh@32 -- # continue 00:05:09.414 13:35:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:09.414 13:35:11 -- setup/common.sh@31 
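Context for the xtrace above: get_meminfo in setup/common.sh walks a meminfo snapshot entry by entry, splitting each line on IFS=': ' and echoing the value once the requested key matches, which is the [[ key == pattern ]] / continue churn filling this log. A minimal standalone sketch of that parsing pattern follows; the helper name get_meminfo_value and the while-read loop are our illustration, not the SPDK code itself.

#!/usr/bin/env bash
# Sketch: split each meminfo line on ': ' and print the value for one key,
# mirroring the scan that get_meminfo runs above.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' strips the trailing colon and padding from the key.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_value HugePages_Total   # prints 1025 at this point in the run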
00:05:09.414 13:35:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.414 13:35:11 -- setup/common.sh@32 -- # continue
00:05:09.414 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.414 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.415 13:35:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:09.415 13:35:11 -- setup/common.sh@33 -- # echo 1025
00:05:09.415 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.415 13:35:11 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:09.415 13:35:11 -- setup/hugepages.sh@112 -- # get_nodes
00:05:09.416 13:35:11 -- setup/hugepages.sh@27 -- # local node
00:05:09.416 13:35:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.416 13:35:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:09.416 13:35:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:09.416 13:35:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:05:09.416 13:35:11 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:09.416 13:35:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
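The get_nodes pass above sizes nodes_sys from /sys/devices/system/node/node*, and the per-node get_meminfo calls that follow switch mem_f to the node's own meminfo file, whose lines carry a "Node <n> " prefix that the expansion at common.sh@29 strips. A sketch of that selection and prefix-stripping, assuming a two-node box like this one:

#!/usr/bin/env bash
# Sketch: pick the per-node meminfo file when a node is given, then strip
# the "Node <n> " prefix so the same key/value parse works on both shapes.
shopt -s extglob                       # +([0-9]) below needs extglob
node=0                                 # example node; this host has node0/node1
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")       # same expansion as setup/common.sh@29
printf '%s\n' "${mem[@]:0:3}"          # e.g. "MemTotal: 97662684 kB" ...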
00:05:09.416 13:35:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.416 13:35:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.416 13:35:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:09.416 13:35:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.416 13:35:11 -- setup/common.sh@18 -- # local node=0
00:05:09.416 13:35:11 -- setup/common.sh@19 -- # local var val
00:05:09.416 13:35:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.416 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.416 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:09.416 13:35:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:09.416 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.416 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.416 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.416 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.416 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86921036 kB' 'MemUsed: 10741648 kB' 'SwapCached: 0 kB' 'Active: 5268704 kB' 'Inactive: 3323000 kB' 'Active(anon): 5020880 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454724 kB' 'Mapped: 116552 kB' 'AnonPages: 140208 kB' 'Shmem: 4883900 kB' 'KernelStack: 11944 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104880 kB' 'Slab: 365296 kB' 'SReclaimable: 104880 kB' 'SUnreclaim: 260416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:09.416 13:35:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.416 13:35:11 -- setup/common.sh@32 -- # continue
00:05:09.416 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.416 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.417 13:35:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.417 13:35:11 -- setup/common.sh@33 -- # echo 0
00:05:09.417 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.417 13:35:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
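The hugepages.sh@115-117 pass above folds reserved pages and each node's surplus into that node's expected count; the same accounting repeats for node 1 just below. A sketch of it for the two nodes seen here (node_surp is our stand-in for a "get_meminfo HugePages_Surp <node>" call, not the SPDK helper):

#!/usr/bin/env bash
# Sketch of the per-node surplus accounting in hugepages.sh@115-117 above.
declare -A nodes_test=([0]=512 [1]=513)   # the odd_alloc split seen in this run
resv=0                                    # HugePages_Rsvd from the run above

node_surp() {
    # Per-node meminfo lines read "Node <n> HugePages_Surp: <count>".
    awk '$3 == "HugePages_Surp:" {print $4}' "/sys/devices/system/node/node$1/meminfo"
}

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(node_surp "$node") ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # 512 and 513 here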
00:05:09.417 13:35:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:09.417 13:35:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:09.417 13:35:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:09.417 13:35:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:09.417 13:35:11 -- setup/common.sh@18 -- # local node=1
00:05:09.417 13:35:11 -- setup/common.sh@19 -- # local var val
00:05:09.417 13:35:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:09.417 13:35:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:09.417 13:35:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:09.417 13:35:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:09.417 13:35:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:09.417 13:35:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:09.417 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.417 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.417 13:35:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 86966204 kB' 'MemUsed: 6752288 kB' 'SwapCached: 0 kB' 'Active: 3517960 kB' 'Inactive: 184440 kB' 'Active(anon): 3370300 kB' 'Inactive(anon): 0 kB' 'Active(file): 147660 kB' 'Inactive(file): 184440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3317016 kB' 'Mapped: 98340 kB' 'AnonPages: 385464 kB' 'Shmem: 2984916 kB' 'KernelStack: 8552 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144764 kB' 'Slab: 472772 kB' 'SReclaimable: 144764 kB' 'SUnreclaim: 328008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:05:09.417 13:35:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.417 13:35:11 -- setup/common.sh@32 -- # continue
00:05:09.417 13:35:11 -- setup/common.sh@31 -- # IFS=': '
00:05:09.417 13:35:11 -- setup/common.sh@31 -- # read -r var val _
00:05:09.418 13:35:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:09.418 13:35:11 -- setup/common.sh@33 -- # echo 0
00:05:09.418 13:35:11 -- setup/common.sh@33 -- # return 0
00:05:09.418 13:35:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.418 13:35:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.418 13:35:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:05:09.418 node0=512 expecting 513
00:05:09.418 13:35:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:09.418 13:35:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:09.418 13:35:11 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:05:09.418 node1=513 expecting 512
00:05:09.418 13:35:11 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:05:09.418 
00:05:09.418 real 0m2.998s
00:05:09.418 user 0m1.225s
00:05:09.418 sys 0m1.835s
00:05:09.418 13:35:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.418 13:35:11 -- common/autotest_common.sh@10 -- # set +x
00:05:09.418 ************************************
00:05:09.418 END TEST odd_alloc
00:05:09.418 ************************************
00:05:09.418 13:35:11 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:09.418 13:35:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:09.418 13:35:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:09.418 13:35:11 -- common/autotest_common.sh@10 -- # set +x
00:05:09.418 ************************************
00:05:09.418 START TEST custom_alloc
00:05:09.418 ************************************
00:05:09.418 13:35:11 -- common/autotest_common.sh@1104 -- # custom_alloc
00:05:09.418 13:35:11 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:09.418 13:35:11 -- setup/hugepages.sh@169 -- # local node
00:05:09.418 13:35:11 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:09.418 13:35:11 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:09.418 13:35:11 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:09.418 13:35:11 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:09.418 13:35:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:09.418 13:35:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.418 13:35:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:09.418 13:35:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.418 13:35:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:09.418 13:35:11 -- setup/hugepages.sh@83 -- # : 256
00:05:09.418 13:35:11 -- setup/hugepages.sh@84 -- # : 1
00:05:09.418 13:35:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:05:09.418 13:35:11 -- setup/hugepages.sh@83 -- # : 0
00:05:09.418 13:35:11 -- setup/hugepages.sh@84 -- # : 0
00:05:09.418 13:35:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:09.418 13:35:11 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:05:09.418 13:35:11 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:09.418 13:35:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
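The get_test_nr_hugepages calls above convert a request in kB into a page count: 1048576 kB becomes nr_hugepages=512 and 2097152 kB becomes 1024 at the 2048 kB Hugepagesize this host reports. The arithmetic, as a sketch:

#!/usr/bin/env bash
# Sketch of the size-to-pages arithmetic in get_test_nr_hugepages above,
# assuming sizes are in kB and the 2048 kB Hugepagesize seen in this run.
hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)  # 2048 here
echo $(( 1048576 / hugepagesize_kb ))   # 512  (the first call above)
echo $(( 2097152 / hugepagesize_kb ))   # 1024 (the second call above)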
00:05:09.418 13:35:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.418 13:35:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.418 13:35:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.418 13:35:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:09.418 13:35:11 -- setup/hugepages.sh@78 -- # return 0
00:05:09.418 13:35:11 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:05:09.418 13:35:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:09.418 13:35:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:09.418 13:35:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:09.418 13:35:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:09.418 13:35:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:09.418 13:35:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:09.418 13:35:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:05:09.418 13:35:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:09.418 13:35:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:09.418 13:35:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:05:09.418 13:35:11 -- setup/hugepages.sh@78 -- # return 0
00:05:09.418 13:35:11 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:05:09.418 13:35:11 -- setup/hugepages.sh@187 -- # setup output
00:05:09.418 13:35:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:09.418 13:35:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.715 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:12.715 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:12.715 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
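With HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' exported, setup.sh is expected to place 512 pages on node 0 and 1024 on node 1, for 1536 total. The stock kernel interface for such a per-node request is the per-node sysfs nr_hugepages file; the loop below is our illustration of that mechanism, not the actual code inside scripts/setup.sh:

#!/usr/bin/env bash
# Our illustration of a per-node 2 MB hugepage request like the HUGENODE
# line above (512 pages on node 0, 1024 on node 1). The per-node sysfs
# nr_hugepages file is the standard kernel interface; scripts/setup.sh may
# well do this differently internally.
declare -A want=([0]=512 [1]=1024)
for node in "${!want[@]}"; do
    echo "${want[$node]}" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1536 / 1536 afterwards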
-- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.715 13:35:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.715 13:35:14 -- setup/common.sh@32 -- # continue 
00:05:12.715 [trace condensed: setup/common.sh@31-32 repeats "IFS=': '; read -r var val _; [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] || continue" for every remaining /proc/meminfo key from SwapTotal through HardwareCorrupted]
00:05:12.716 13:35:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:12.716 13:35:14 -- setup/common.sh@33 -- # echo 0
00:05:12.716 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.716 13:35:14 -- setup/hugepages.sh@97 -- # anon=0
00:05:12.716 13:35:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:12.716 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.716 13:35:14 -- setup/common.sh@18 -- # local node=
00:05:12.716 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.716 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.716 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.716 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.716 13:35:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.716 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.716 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.716 13:35:14 -- setup/common.sh@31 -- # IFS=': '
00:05:12.716 13:35:14 -- setup/common.sh@31 -- # read -r var val _
00:05:12.716 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 172838460 kB' 'MemAvailable: 175721756 kB' 'Buffers: 3896 kB' 'Cached: 11767932 kB' 'SwapCached: 0 kB' 'Active: 8786912 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525832 kB' 'Mapped: 214900 kB' 'Shmem: 7868904 kB' 'KReclaimable: 249644 kB' 'Slab: 838292 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588648 kB' 'KernelStack: 20496 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 9912164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
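For readers following the trace: get_meminfo resolves one /proc/meminfo (or per-node meminfo) field by snapshotting the file and scanning it key by key, which is what the @31/@32 lines above keep doing. Below is a minimal sketch reconstructed from the trace, not copied from setup/common.sh -- the real function mapfiles the snapshot into an array first, while this version streams the file, but the prefix strip, the match, and the echo/return shortcut are the same.

# minimal sketch, reconstructed from the trace above
shopt -s extglob   # the "Node +([0-9]) " prefix strip below is an extglob pattern
get_meminfo() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # with a node argument the per-node file exists and wins (common.sh@23-24);
    # with none, the probed path is the bogus ".../node/meminfo" and /proc/meminfo stays
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }            # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<< "$line" # split "Key:  value [kB]" as common.sh@31 does
        [[ $var == "$get" ]] || continue       # the @32 comparison seen all through the trace
        echo "$val"                            # value only; a trailing kB unit falls into _
        return 0
    done < "$mem_f"
    return 1
}

Under these assumptions, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total 0 reads node0's file, matching the values the trace extracts below.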
00:05:12.716 [trace condensed: setup/common.sh@31-32 scans every key of the snapshot above, MemTotal through HugePages_Rsvd, against HugePages_Surp and continues past each non-match]
00:05:12.717 13:35:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.717 13:35:14 -- setup/common.sh@33 -- # echo 0
00:05:12.717 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.717 13:35:14 -- setup/hugepages.sh@99 -- # surp=0
00:05:12.717 13:35:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:12.717 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:12.717 13:35:14 -- setup/common.sh@18 -- # local node=
00:05:12.717 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.717 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.717 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.717 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.717 13:35:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.717 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.717 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.717 13:35:14 -- setup/common.sh@31 -- # IFS=': '
00:05:12.717 13:35:14 -- setup/common.sh@31 -- # read -r var val _
00:05:12.718 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 172838460 kB' 'MemAvailable: 175721756 kB' 'Buffers: 3896 kB' 'Cached: 11767932 kB' 'SwapCached: 0 kB' 'Active: 8786956 kB' 'Inactive: 3507440 kB' 'Active(anon): 8391472 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525872 kB' 'Mapped: 214900 kB' 'Shmem: 7868904 kB' 'KReclaimable: 249644 kB' 'Slab: 838292 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588648 kB' 'KernelStack: 20512 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 9912180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
00:05:12.718 [trace condensed: setup/common.sh@31-32 scans every key of the snapshot above, MemTotal through HugePages_Free, against HugePages_Rsvd and continues past each non-match]
00:05:12.719 13:35:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:12.719 13:35:14 -- setup/common.sh@33 -- # echo 0
00:05:12.719 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.719 13:35:14 -- setup/hugepages.sh@100 -- # resv=0
00:05:12.719 13:35:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:05:12.719 13:35:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:12.719 13:35:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:12.719 13:35:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:12.719 13:35:14 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:12.719 13:35:14 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
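The @102-@109 lines above are the script publishing and sanity-checking its hugepage bookkeeping: 1536 configured pages with reserved, surplus, and anonymous counts all zero. A sketch of that arithmetic, reusing the get_meminfo sketch earlier; nr_hugepages=1536 mirrors the value echoed in the log, and the real script compares against its own configured count rather than a hard-coded literal.

nr_hugepages=1536                     # the value echoed by hugepages.sh@102 above
anon=$(get_meminfo AnonHugePages)     # 0 -> no transparent huge pages in use (kB)
surp=$(get_meminfo HugePages_Surp)    # 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)    # 0 reserved pages
total=$(get_meminfo HugePages_Total)  # 1536, fetched by the script just below
# the @107/@110 consistency check: the kernel's total must equal the
# configured count plus surplus plus reserved
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"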
00:05:12.719 13:35:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:12.719 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:12.719 13:35:14 -- setup/common.sh@18 -- # local node=
00:05:12.719 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.719 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.719 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.719 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:12.719 13:35:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:12.719 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.719 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.719 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' [full /proc/meminfo snapshot repeated, identical to the HugePages_Rsvd read above except 'Committed_AS: 9912192 kB']
00:05:12.720 [trace condensed: setup/common.sh@31-32 scans every key of that snapshot, MemTotal through Unaccepted, against HugePages_Total and continues past each non-match]
00:05:12.720 13:35:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:12.720 13:35:14 -- setup/common.sh@33 -- # echo 1536
00:05:12.720 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.720 13:35:14 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:12.720 13:35:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:12.720 13:35:14 -- setup/hugepages.sh@27 -- # local node
00:05:12.720 13:35:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.720 13:35:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:12.720 13:35:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:12.720 13:35:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:12.720 13:35:14 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:12.720 13:35:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:12.720 13:35:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.720 13:35:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.720 13:35:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:12.720 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.720 13:35:14 -- setup/common.sh@18 -- # local node=0
00:05:12.720 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.720 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.720 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.720 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.720 13:35:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.720 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.720 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.720 13:35:14 -- setup/common.sh@31 -- # IFS=': '
00:05:12.720 13:35:14 -- setup/common.sh@31 -- # read -r var val _
00:05:12.720 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86910852 kB' 'MemUsed: 10751832 kB' 'SwapCached: 0 kB' 'Active: 5269244 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021420 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454752 kB' 'Mapped: 116552 kB' 'AnonPages: 140736 kB' 'Shmem: 4883928 kB' 'KernelStack: 11960 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104880 kB' 'Slab: 365308 kB' 'SReclaimable: 104880 kB' 'SUnreclaim: 260428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
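get_nodes (hugepages.sh@27-33 above) discovers the NUMA topology with an extglob over /sys/devices/system/node and records a per-node page count: 512 for node0 and 1024 for node1 in this run. A sketch of that enumeration follows; the trace only shows the resulting assignments, so the sysfs leaf read below (per-node nr_hugepages for the 2048 kB size seen in the snapshots) is an assumption about where those numbers come from.

shopt -s extglob                       # the node+([0-9]) glob needs extglob, as in the trace
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # assumed source of the 512/1024 values assigned at hugepages.sh@30
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}              # 2 on this box (hugepages.sh@32)

With the topology in hand, the @115-@117 loop below then queries each node's own meminfo for HugePages_Surp and accumulates it into nodes_test.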
00:05:12.720 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.720 13:35:14 -- setup/common.sh@18 -- # local node=0
00:05:12.720 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.720 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.720 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.720 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:12.720 13:35:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:12.720 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.720 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.720 13:35:14 -- setup/common.sh@31 -- # IFS=': '
00:05:12.720 13:35:14 -- setup/common.sh@31 -- # read -r var val _
00:05:12.720 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86910852 kB' 'MemUsed: 10751832 kB' 'SwapCached: 0 kB' 'Active: 5269244 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021420 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454752 kB' 'Mapped: 116552 kB' 'AnonPages: 140736 kB' 'Shmem: 4883928 kB' 'KernelStack: 11960 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104880 kB' 'Slab: 365308 kB' 'SReclaimable: 104880 kB' 'SUnreclaim: 260428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:12.721 13:35:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.721 13:35:14 -- setup/common.sh@33 -- # echo 0
00:05:12.721 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.721 13:35:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
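What hugepages.sh is doing around these calls: for each NUMA node it adds reserved and surplus pages to the expected count before comparing against what the test configured (512 on node 0, 1024 on node 1). A sketch of that bookkeeping, reusing the hypothetical get_meminfo sketched earlier:

    declare -a nodes_test=(512 1024)   # pages the test requested, per node
    resv=0                             # reserved pages; zero in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # 512 and 1024 here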
00:05:12.721 13:35:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:12.721 13:35:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:12.721 13:35:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:12.721 13:35:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:12.721 13:35:14 -- setup/common.sh@18 -- # local node=1
00:05:12.721 13:35:14 -- setup/common.sh@19 -- # local var val
00:05:12.721 13:35:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:12.721 13:35:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:12.721 13:35:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:12.721 13:35:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:12.721 13:35:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:12.721 13:35:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:12.721 13:35:14 -- setup/common.sh@31 -- # IFS=': '
00:05:12.721 13:35:14 -- setup/common.sh@31 -- # read -r var val _
00:05:12.721 13:35:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718492 kB' 'MemFree: 85928648 kB' 'MemUsed: 7789844 kB' 'SwapCached: 0 kB' 'Active: 3517736 kB' 'Inactive: 184440 kB' 'Active(anon): 3370076 kB' 'Inactive(anon): 0 kB' 'Active(file): 147660 kB' 'Inactive(file): 184440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3317132 kB' 'Mapped: 98348 kB' 'AnonPages: 385104 kB' 'Shmem: 2985032 kB' 'KernelStack: 8536 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144764 kB' 'Slab: 472984 kB' 'SReclaimable: 144764 kB' 'SUnreclaim: 328220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:12.722 13:35:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:12.722 13:35:14 -- setup/common.sh@33 -- # echo 0
00:05:12.722 13:35:14 -- setup/common.sh@33 -- # return 0
00:05:12.722 13:35:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:12.722 13:35:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.722 13:35:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.722 13:35:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.722 13:35:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:05:12.722 13:35:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:12.722 13:35:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:12.722 13:35:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:12.722 13:35:14 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:05:12.722 13:35:14 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:12.722
00:05:12.722 real 0m3.038s
00:05:12.722 user 0m1.222s
00:05:12.722 sys 0m1.882s
00:05:12.722 13:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.722 13:35:14 -- common/autotest_common.sh@10 -- # set +x
00:05:12.722 ************************************
00:05:12.722 END TEST custom_alloc
00:05:12.722 ************************************
00:05:12.722 13:35:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:12.722 13:35:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:12.722 13:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:12.722 13:35:14 -- common/autotest_common.sh@10 -- # set +x
00:05:12.722 ************************************
00:05:12.722 START TEST no_shrink_alloc
00:05:12.722 ************************************
00:05:12.722 13:35:14 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
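The lines that follow derive the page count for no_shrink_alloc: get_test_nr_hugepages is called with 2097152 (evidently kB, i.e. 2 GiB, given the result) and node 0, and lands on nr_hugepages=1024. The arithmetic, as a sketch assuming the 2048 kB Hugepagesize the later meminfo dumps report:

    size_kb=2097152           # requested pool size in kB (2 GiB)
    default_hugepages_kb=2048 # one 2 MiB hugepage, per the dumps below
    echo $(( size_kb / default_hugepages_kb ))   # -> 1024 pages, pinned to node 0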
00:05:12.722 13:35:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:12.722 13:35:14 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:12.722 13:35:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:12.722 13:35:14 -- setup/hugepages.sh@51 -- # shift
00:05:12.722 13:35:14 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:12.722 13:35:14 -- setup/hugepages.sh@52 -- # local node_ids
00:05:12.722 13:35:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:12.722 13:35:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:12.722 13:35:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:12.722 13:35:14 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:12.722 13:35:14 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:12.722 13:35:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:12.723 13:35:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:12.723 13:35:14 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:12.723 13:35:14 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:12.723 13:35:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:12.723 13:35:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:12.723 13:35:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:12.723 13:35:14 -- setup/hugepages.sh@73 -- # return 0
00:05:12.723 13:35:14 -- setup/hugepages.sh@198 -- # setup output
00:05:12.723 13:35:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.723 13:35:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:15.267 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:15.267 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:15.267 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:15.267 13:35:17 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:15.267 13:35:17 -- setup/hugepages.sh@89 -- # local node
00:05:15.267 13:35:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:15.267 13:35:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:15.267 13:35:17 -- setup/hugepages.sh@92 -- # local surp
00:05:15.267 13:35:17 -- setup/hugepages.sh@93 -- # local resv
00:05:15.267 13:35:17 -- setup/hugepages.sh@94 -- # local anon
00:05:15.267 13:35:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:15.267 13:35:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:15.267 13:35:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:15.267 13:35:17 -- setup/common.sh@18 -- # local node=
00:05:15.267 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.267 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.267 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.267 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.267 13:35:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.267 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.267 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.267 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.267 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.267 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173884616 kB' 'MemAvailable: 176767912 kB' 'Buffers: 3896 kB' 'Cached: 11768056 kB' 'SwapCached: 0 kB' 'Active: 8788412 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392928 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526712 kB' 'Mapped: 215004 kB' 'Shmem: 7869028 kB' 'KReclaimable: 249644 kB' 'Slab: 837796 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588152 kB' 'KernelStack: 20528 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9912788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
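verify_nr_hugepages reads AnonHugePages from the system-wide dump above, right after checking that transparent hugepages are not set to 'never' ('always [madvise] never' is the current setting). The apparent intent is to account for THP usage that sits alongside the explicit pool; a hedged sketch of that guard, reusing the earlier hypothetical helper:

    anon=$(get_meminfo AnonHugePages)   # kB of transparent huge pages in use
    if (( anon != 0 )); then
        echo "note: ${anon} kB of THP active; hugepage accounting may be skewed"
    fi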
00:05:15.268 13:35:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:15.268 13:35:17 -- setup/common.sh@33 -- # echo 0
00:05:15.268 13:35:17 -- setup/common.sh@33 -- # return 0
00:05:15.268 13:35:17 -- setup/hugepages.sh@97 -- # anon=0
00:05:15.268 13:35:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:15.268 13:35:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.268 13:35:17 -- setup/common.sh@18 -- # local node=
00:05:15.268 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.268 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.268 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.268 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.268 13:35:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.268 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.268 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.268 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.268 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.268 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886360 kB' 'MemAvailable: 176769656 kB' 'Buffers: 3896 kB' 'Cached: 11768060 kB' 'SwapCached: 0 kB' 'Active: 8787552 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392068 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526340 kB' 'Mapped: 214908 kB' 'Shmem: 7869032 kB' 'KReclaimable: 249644 kB' 'Slab: 837708 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588064 kB' 'KernelStack: 20512 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9912800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
00:05:15.270 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.270 13:35:17 -- setup/common.sh@33 -- # echo 0
00:05:15.270 13:35:17 -- setup/common.sh@33 -- # return 0
00:05:15.270 13:35:17 -- setup/hugepages.sh@99 -- # surp=0
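With anon and surp both read back as 0, one lookup remains (HugePages_Rsvd, next) before the pool can be checked. The shape of the final comparison, sketched from the values visible in these dumps (1024 pages total) and mirroring the earlier hugepages.sh@110 check -- an illustration, not the script's literal code:

    anon=0 surp=0 resv=0                    # values read back so far
    total=$(get_meminfo HugePages_Total)    # 1024 in the dumps above
    (( total == 1024 + surp + resv )) && echo "pool intact: nothing was shrunk"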
00:05:15.270 13:35:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.270 13:35:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.270 13:35:17 -- setup/common.sh@18 -- # local node=
00:05:15.270 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.270 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.270 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.270 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.270 13:35:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.270 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.270 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.270 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.270 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.270 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886436 kB' 'MemAvailable: 176769732 kB' 'Buffers: 3896 kB' 'Cached: 11768072 kB' 'SwapCached: 0 kB' 'Active: 8788136 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392652 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526856 kB' 'Mapped: 215412 kB' 'Shmem: 7869044 kB' 'KReclaimable: 249644 kB' 'Slab: 837708 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588064 kB' 'KernelStack: 20512 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9914040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
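A consistency check worth noting in the dump above: Hugetlb (the kB backing the pool) equals HugePages_Total times Hugepagesize:

    pages=1024 page_kb=2048
    echo "$(( pages * page_kb )) kB"   # 2097152 kB, matching the 'Hugetlb:' line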
00:05:15.270 13:35:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.270 13:35:17 -- setup/common.sh@32 -- # continue
00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.271
13:35:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # continue 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:15.271 13:35:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.271 13:35:17 -- setup/common.sh@33 -- # echo 0 00:05:15.271 
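[editor's note] The scans condensed in this log are the get_meminfo helper from the setup/common.sh seen in the trace: it snapshots /proc/meminfo (or a per-node meminfo file) and walks it key by key until the requested field matches. Below is a minimal sketch of that pattern, reconstructed from the xtrace records themselves; the function body is illustrative, not a verbatim copy of the script.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # With an empty $node this probes /sys/devices/system/node/node/meminfo,
      # which never exists, so it falls back to /proc/meminfo; that is exactly
      # what the [[ -e ... ]] records in the trace show.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each skipped key is one continue record
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Rsvd     # prints 0 in the run above
  get_meminfo HugePages_Surp 0   # node-0 variant used later in this log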
00:05:15.270 13:35:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.270 13:35:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.270 13:35:17 -- setup/common.sh@18 -- # local node=
00:05:15.270 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.270 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.270 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.270 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.270 13:35:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.270 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.270 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.270 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.270 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.270 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173886436 kB' 'MemAvailable: 176769732 kB' 'Buffers: 3896 kB' 'Cached: 11768072 kB' 'SwapCached: 0 kB' 'Active: 8788136 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392652 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526856 kB' 'Mapped: 215412 kB' 'Shmem: 7869044 kB' 'KReclaimable: 249644 kB' 'Slab: 837708 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588064 kB' 'KernelStack: 20512 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9914040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace condensed: the key-scan loop walks every field of the snapshot above, one continue / IFS=': ' / read -r var val _ triple per key, until HugePages_Rsvd matches]
00:05:15.271 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.271 13:35:17 -- setup/common.sh@33 -- # echo 0
00:05:15.271 13:35:17 -- setup/common.sh@33 -- # return 0
00:05:15.271 13:35:17 -- setup/hugepages.sh@100 -- # resv=0
00:05:15.271 13:35:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:15.271 nr_hugepages=1024
00:05:15.272 13:35:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:15.272 resv_hugepages=0
00:05:15.272 13:35:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:15.272 surplus_hugepages=0
00:05:15.272 13:35:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:15.272 anon_hugepages=0
00:05:15.272 13:35:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.272 13:35:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:15.272 13:35:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:15.272 13:35:17 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:15.272 13:35:17 -- setup/common.sh@18 -- # local node=
00:05:15.272 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.272 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.272 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.272 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.272 13:35:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.272 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.272 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.272 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.272 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.272 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173890016 kB' 'MemAvailable: 176773312 kB' 'Buffers: 3896 kB' 'Cached: 11768084 kB' 'SwapCached: 0 kB' 'Active: 8792856 kB' 'Inactive: 3507440 kB' 'Active(anon): 8397372 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531640 kB' 'Mapped: 215612 kB' 'Shmem: 7869056 kB' 'KReclaimable: 249644 kB' 'Slab: 837708 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 588064 kB' 'KernelStack: 20512 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9918948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314940 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace condensed: the same key scan runs over the snapshot above until HugePages_Total matches]
00:05:15.274 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.274 13:35:17 -- setup/common.sh@33 -- # echo 1024
00:05:15.274 13:35:17 -- setup/common.sh@33 -- # return 0
00:05:15.274 13:35:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.274 13:35:17 -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.274 13:35:17 -- setup/hugepages.sh@27 -- # local node
00:05:15.274 13:35:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.274 13:35:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.274 13:35:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.274 13:35:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:15.274 13:35:17 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:15.274 13:35:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
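[editor's note] The arithmetic records above are the consistency check itself: the global HugePages_Total must equal the configured nr_hugepages plus surplus plus reserved pages, and get_nodes then discovers the NUMA layout straight from sysfs. A sketch of that logic follows; nodes_sys, no_nodes and the ${node##*node} index extraction mirror the trace, while the sysfs read and the error handling are illustrative glue, and the hugepages-2048kB directory assumes the 2 MB default page size reported in the snapshots.

  shopt -s extglob nullglob
  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)    # 0 in the run above
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in the run above
  total=$(get_meminfo HugePages_Total)  # 1024 in the run above
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} peels the numeric index off the path, as in the trace
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}             # 2 on this machine: node0=1024, node1=0
  (( no_nodes > 0 )) || exit 1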
00:05:15.274 13:35:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:15.274 13:35:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:15.274 13:35:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:15.274 13:35:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.274 13:35:17 -- setup/common.sh@18 -- # local node=0
00:05:15.274 13:35:17 -- setup/common.sh@19 -- # local var val
00:05:15.274 13:35:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:15.274 13:35:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.274 13:35:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:15.274 13:35:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:15.274 13:35:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.274 13:35:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.274 13:35:17 -- setup/common.sh@31 -- # IFS=': '
00:05:15.274 13:35:17 -- setup/common.sh@31 -- # read -r var val _
00:05:15.275 13:35:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85878332 kB' 'MemUsed: 11784352 kB' 'SwapCached: 0 kB' 'Active: 5269048 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021224 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454772 kB' 'Mapped: 116836 kB' 'AnonPages: 140468 kB' 'Shmem: 4883948 kB' 'KernelStack: 11928 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104880 kB' 'Slab: 365028 kB' 'SReclaimable: 104880 kB' 'SUnreclaim: 260148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the key scan walks the node0 snapshot above until HugePages_Surp matches]
00:05:15.276 13:35:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.276 13:35:17 -- setup/common.sh@33 -- # echo 0
00:05:15.276 13:35:17 -- setup/common.sh@33 -- # return 0
00:05:15.276 13:35:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:15.276 13:35:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:15.276 13:35:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:15.276 13:35:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:15.276 13:35:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:15.276 node0=1024 expecting 1024
00:05:15.276 13:35:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:15.276 13:35:17 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:15.276 13:35:17 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:15.276 13:35:17 -- setup/hugepages.sh@202 -- # setup output
00:05:15.276 13:35:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:15.276 13:35:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:17.844 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:17.844 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:17.844 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:17.844 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:17.844 13:35:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:17.844 13:35:20 -- setup/hugepages.sh@89 -- # local node
00:05:17.844 13:35:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.844 13:35:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.844 13:35:20 -- setup/hugepages.sh@92 -- # local surp
00:05:17.844 13:35:20 -- setup/hugepages.sh@93 -- # local resv
00:05:17.844 13:35:20 -- setup/hugepages.sh@94 -- # local anon
00:05:17.844 13:35:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.844 13:35:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.844 13:35:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:17.844 13:35:20 -- setup/common.sh@18 -- # local node=
00:05:17.844 13:35:20 -- setup/common.sh@19 -- # local var val
00:05:17.844 13:35:20 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.844 13:35:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.844 13:35:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.844 13:35:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.844 13:35:20 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.844 13:35:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.844 13:35:20 -- setup/common.sh@31 -- # IFS=': '
00:05:17.844 13:35:20 -- setup/common.sh@31 -- # read -r var val _
00:05:17.844 13:35:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173905636 kB' 'MemAvailable: 176788932 kB' 'Buffers: 3896 kB' 'Cached: 11768152 kB' 'SwapCached: 0 kB' 'Active: 8788592 kB' 'Inactive: 3507440 kB' 'Active(anon): 8393108 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526768 kB' 'Mapped: 215008 kB' 'Shmem: 7869124 kB' 'KReclaimable: 249644 kB' 'Slab: 837188 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 587544 kB' 'KernelStack: 20512 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9913144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314984 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace condensed: the key scan walks the snapshot above until AnonHugePages matches]
00:05:17.845 13:35:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:17.845 13:35:20 -- setup/common.sh@33 -- # echo 0
00:05:17.845 13:35:20 -- setup/common.sh@33 -- # return 0
00:05:17.845 13:35:20 -- setup/hugepages.sh@97 -- # anon=0
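[editor's note] verify_nr_hugepages only counts AnonHugePages when transparent hugepages are available: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record above is a glob test against the active (bracketed) mode in the THP sysfs file. A sketch of that guard; the sysfs path is the standard kernel THP knob, the variable names are illustrative.

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP is not disabled, so AnonHugePages in /proc/meminfo is meaningful
      anon=$(get_meminfo AnonHugePages)                # 0 kB in this run
  fi
  echo "anon_hugepages=$anon"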
00:05:17.845 13:35:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:17.845 13:35:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.845 13:35:20 -- setup/common.sh@18 -- # local node=
00:05:17.845 13:35:20 -- setup/common.sh@19 -- # local var val
00:05:17.845 13:35:20 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.845 13:35:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.845 13:35:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.845 13:35:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.845 13:35:20 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.845 13:35:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.845 13:35:20 -- setup/common.sh@31 -- # IFS=': '
00:05:17.845 13:35:20 -- setup/common.sh@31 -- # read -r var val _
00:05:17.845 13:35:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173906192 kB' 'MemAvailable: 176789488 kB' 'Buffers: 3896 kB' 'Cached: 11768156 kB' 'SwapCached: 0 kB' 'Active: 8788128 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392644 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526360 kB' 'Mapped: 214988 kB' 'Shmem: 7869128 kB' 'KReclaimable: 249644 kB' 'Slab: 837188 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 587544 kB' 'KernelStack: 20480 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9913156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314952 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB'
[xtrace condensed: the key scan is part-way through the snapshot above, still looking for HugePages_Surp]
00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.846 13:35:20 -- setup/common.sh@32 -- # 
continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.846 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.846 13:35:20 -- setup/common.sh@33 -- # echo 0 00:05:17.846 13:35:20 -- setup/common.sh@33 -- # return 0 00:05:17.846 13:35:20 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.846 13:35:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.846 13:35:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.846 13:35:20 -- setup/common.sh@18 -- # local node= 00:05:17.846 13:35:20 -- setup/common.sh@19 -- # local var val 00:05:17.846 13:35:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.846 13:35:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.846 13:35:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.846 13:35:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.846 13:35:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.846 13:35:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.846 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173905940 kB' 'MemAvailable: 176789236 kB' 'Buffers: 3896 kB' 'Cached: 11768156 kB' 'SwapCached: 0 kB' 
'Active: 8788204 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392720 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526476 kB' 'Mapped: 214988 kB' 'Shmem: 7869128 kB' 'KReclaimable: 249644 kB' 'Slab: 837188 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 587544 kB' 'KernelStack: 20496 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9913172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.847 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.847 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 
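The lookups traced above all run through one helper: get_meminfo() snapshots a meminfo file, strips any per-node prefix, then walks it field by field until the requested key matches, emitting one "continue" record per miss. That is why a single query such as HugePages_Surp or HugePages_Rsvd expands into dozens of near-identical records here. A condensed sketch of that loop, reconstructed from the setup/common.sh lines visible in this trace (the real helper may differ in detail):

    # sketch of the lookup this xtrace is stepping through
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}            # key to fetch, optional NUMA node
        local mem_f=/proc/meminfo var val _
        local -a mem
        # per-node queries (node=0 later in this log) read the sysfs copy instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix on sysfs lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each miss is one 'continue' record
            echo "${val:-0}"                   # value only; a trailing 'kB' lands in $_
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The surplus lookup above returned 0, the reserved lookup in progress below does too, and anon was already 0, so the check traced at hugepages.sh@107 reduces to HugePages_Total == nr_hugepages, i.e. 1024 == 1024 + 0 + 0, before the per-node accounting starts.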
00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.848 13:35:20 -- setup/common.sh@33 -- # echo 0 00:05:17.848 13:35:20 -- setup/common.sh@33 -- # return 0 00:05:17.848 13:35:20 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.848 13:35:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.848 nr_hugepages=1024 00:05:17.848 13:35:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.848 resv_hugepages=0 00:05:17.848 13:35:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.848 surplus_hugepages=0 00:05:17.848 13:35:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.848 anon_hugepages=0 00:05:17.848 13:35:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.848 13:35:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.848 13:35:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.848 13:35:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.848 13:35:20 -- setup/common.sh@18 -- # local node= 00:05:17.848 13:35:20 -- setup/common.sh@19 -- # local var val 00:05:17.848 13:35:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.848 13:35:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.848 13:35:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.848 13:35:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.848 13:35:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.848 13:35:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381176 kB' 'MemFree: 173906724 kB' 'MemAvailable: 176790020 kB' 'Buffers: 3896 kB' 'Cached: 11768184 kB' 'SwapCached: 0 kB' 'Active: 8788304 kB' 'Inactive: 3507440 kB' 'Active(anon): 8392820 kB' 'Inactive(anon): 0 kB' 'Active(file): 395484 kB' 'Inactive(file): 3507440 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526480 kB' 'Mapped: 214988 kB' 'Shmem: 7869156 kB' 'KReclaimable: 249644 kB' 'Slab: 837188 kB' 'SReclaimable: 249644 kB' 'SUnreclaim: 587544 kB' 'KernelStack: 20496 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 9913184 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 314936 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2915284 kB' 'DirectMap2M: 12492800 kB' 'DirectMap1G: 186646528 kB' 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.848 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.848 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- 
setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.849 13:35:20 -- setup/common.sh@32 -- # continue 00:05:17.849 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.110 13:35:20 -- 
setup/common.sh@33 -- # echo 1024 00:05:18.110 13:35:20 -- setup/common.sh@33 -- # return 0 00:05:18.110 13:35:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.110 13:35:20 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.110 13:35:20 -- setup/hugepages.sh@27 -- # local node 00:05:18.110 13:35:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.110 13:35:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.110 13:35:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.110 13:35:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:18.110 13:35:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:18.110 13:35:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.110 13:35:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.110 13:35:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.110 13:35:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.110 13:35:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.110 13:35:20 -- setup/common.sh@18 -- # local node=0 00:05:18.110 13:35:20 -- setup/common.sh@19 -- # local var val 00:05:18.110 13:35:20 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.110 13:35:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.110 13:35:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.110 13:35:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.110 13:35:20 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.110 13:35:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85892664 kB' 'MemUsed: 11770020 kB' 'SwapCached: 0 kB' 'Active: 5269160 kB' 'Inactive: 3323000 kB' 'Active(anon): 5021336 kB' 'Inactive(anon): 0 kB' 'Active(file): 247824 kB' 'Inactive(file): 3323000 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8454780 kB' 'Mapped: 116628 kB' 'AnonPages: 140092 kB' 'Shmem: 4883956 kB' 'KernelStack: 11928 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104880 kB' 'Slab: 364884 kB' 'SReclaimable: 104880 kB' 'SUnreclaim: 260004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.110 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.110 13:35:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 
13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # continue 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.111 13:35:20 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.111 13:35:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.111 13:35:20 -- setup/common.sh@33 -- # echo 0 00:05:18.111 13:35:20 -- setup/common.sh@33 -- # return 0 00:05:18.111 13:35:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.111 13:35:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.111 13:35:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.111 13:35:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.111 13:35:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.111 node0=1024 expecting 1024 00:05:18.111 13:35:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.111 00:05:18.111 real 0m5.541s 00:05:18.111 user 0m2.150s 00:05:18.111 sys 0m3.475s 00:05:18.111 13:35:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.111 13:35:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.111 ************************************ 00:05:18.111 END TEST no_shrink_alloc 00:05:18.111 ************************************ 00:05:18.111 13:35:20 -- setup/hugepages.sh@217 -- # clear_hp 00:05:18.111 13:35:20 -- setup/hugepages.sh@37 -- # local node hp 00:05:18.111 13:35:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.111 
13:35:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.111 13:35:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.111 13:35:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.111 13:35:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.111 13:35:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:18.111 13:35:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.111 13:35:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.111 13:35:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:18.111 13:35:20 -- setup/hugepages.sh@41 -- # echo 0 00:05:18.111 13:35:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:18.111 13:35:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:18.111 00:05:18.111 real 0m21.598s 00:05:18.111 user 0m8.338s 00:05:18.111 sys 0m12.876s 00:05:18.111 13:35:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.112 13:35:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.112 ************************************ 00:05:18.112 END TEST hugepages 00:05:18.112 ************************************ 00:05:18.112 13:35:20 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:18.112 13:35:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.112 13:35:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.112 13:35:20 -- common/autotest_common.sh@10 -- # set +x 00:05:18.112 ************************************ 00:05:18.112 START TEST driver 00:05:18.112 ************************************ 00:05:18.112 13:35:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:18.112 * Looking for test storage... 
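Before the driver suite gets going: the hugepages suite signed off just above with clear_hp, which loops over every node and every pool size and echoes 0 into each. In outline, assuming the standard nr_hugepages sysfs file as the write target (the redirect itself is not echoed by xtrace):

# Sketch of clear_hp: zero each hugepage pool on each NUMA node, then
# flag that huge memory was cleared for later stages. Needs root.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target file; see note above
    done
done
export CLEAR_HUGE=yes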
00:05:18.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:18.112 13:35:20 -- setup/driver.sh@68 -- # setup reset 00:05:18.112 13:35:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.112 13:35:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.307 13:35:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:22.307 13:35:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.307 13:35:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.307 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:22.307 ************************************ 00:05:22.307 START TEST guess_driver 00:05:22.307 ************************************ 00:05:22.307 13:35:24 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:22.307 13:35:24 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:22.307 13:35:24 -- setup/driver.sh@47 -- # local fail=0 00:05:22.307 13:35:24 -- setup/driver.sh@49 -- # pick_driver 00:05:22.307 13:35:24 -- setup/driver.sh@36 -- # vfio 00:05:22.307 13:35:24 -- setup/driver.sh@21 -- # local iommu_grups 00:05:22.307 13:35:24 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:22.307 13:35:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:22.307 13:35:24 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:22.307 13:35:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:22.307 13:35:24 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:05:22.307 13:35:24 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:22.307 13:35:24 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:22.307 13:35:24 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:22.307 13:35:24 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:22.307 13:35:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:22.307 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:22.307 13:35:24 -- setup/driver.sh@30 -- # return 0 00:05:22.307 13:35:24 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:22.307 13:35:24 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:22.307 13:35:24 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:22.307 13:35:24 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:22.307 Looking for driver=vfio-pci 00:05:22.307 13:35:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.308 13:35:24 -- setup/driver.sh@45 -- # setup output config 00:05:22.308 13:35:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.308 13:35:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:24.844 13:35:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:24.844 13:35:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:24.844 13:35:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.412 13:35:27 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:05:25.412 13:35:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:25.412 13:35:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:25.671 13:35:27 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:25.671 13:35:27 -- setup/driver.sh@65 -- # setup reset 00:05:25.671 13:35:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.671 13:35:27 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.864 00:05:29.865 real 0m7.398s 00:05:29.865 user 0m2.022s 00:05:29.865 sys 0m3.756s 00:05:29.865 13:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.865 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.865 ************************************ 00:05:29.865 END TEST guess_driver 00:05:29.865 ************************************ 00:05:29.865 00:05:29.865 real 0m11.450s 00:05:29.865 user 0m3.215s 00:05:29.865 sys 0m5.880s 00:05:29.865 13:35:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.865 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.865 ************************************ 00:05:29.865 END TEST driver 00:05:29.865 ************************************ 00:05:29.865 13:35:31 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:29.865 13:35:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.865 13:35:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.865 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:05:29.865 ************************************ 00:05:29.865 START TEST devices 00:05:29.865 ************************************ 00:05:29.865 13:35:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:29.865 * Looking for test storage... 
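To summarize the guess_driver run that just finished: the host exposes IOMMU groups (the (( 174 > 0 )) check) and modprobe can resolve vfio_pci's dependency chain, so vfio-pci wins. A condensed sketch of that decision; the real driver.sh also weighs a uio_pci_generic fallback that this run never reaches:

# Sketch of the vfio-pci pick traced in guess_driver.
pick_driver() {
    shopt -s nullglob   # so an empty iommu_groups dir counts as zero
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == [Yy] ]]; then
        # a module is usable iff modprobe can print its dependency chain
        modprobe --show-depends vfio_pci &> /dev/null && { echo vfio-pci; return 0; }
    fi
    echo 'No valid driver found'
}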
00:05:29.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:29.865 13:35:31 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:29.865 13:35:31 -- setup/devices.sh@192 -- # setup reset 00:05:29.865 13:35:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.865 13:35:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.214 13:35:34 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:33.214 13:35:34 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:33.214 13:35:34 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:33.214 13:35:34 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:33.214 13:35:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:33.214 13:35:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:33.214 13:35:34 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:33.214 13:35:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:33.214 13:35:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:33.214 13:35:34 -- setup/devices.sh@196 -- # blocks=() 00:05:33.214 13:35:34 -- setup/devices.sh@196 -- # declare -a blocks 00:05:33.214 13:35:34 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:33.214 13:35:34 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:33.214 13:35:34 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:33.214 13:35:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:33.214 13:35:34 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:33.214 13:35:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:33.214 13:35:34 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:05:33.214 13:35:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:33.214 13:35:34 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:33.214 13:35:34 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:33.214 13:35:34 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:33.214 No valid GPT data, bailing 00:05:33.214 13:35:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:33.214 13:35:34 -- scripts/common.sh@393 -- # pt= 00:05:33.214 13:35:34 -- scripts/common.sh@394 -- # return 1 00:05:33.214 13:35:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:33.214 13:35:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:33.214 13:35:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:33.214 13:35:35 -- setup/common.sh@80 -- # echo 1000204886016 00:05:33.214 13:35:35 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:33.214 13:35:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:33.214 13:35:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:05:33.214 13:35:35 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:33.214 13:35:35 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:33.214 13:35:35 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:33.214 13:35:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.214 13:35:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.214 13:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:33.214 ************************************ 00:05:33.214 START TEST nvme_mount 00:05:33.214 ************************************ 00:05:33.214 13:35:35 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:05:33.214 13:35:35 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:33.214 13:35:35 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:33.214 13:35:35 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.214 13:35:35 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.214 13:35:35 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:33.214 13:35:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:33.214 13:35:35 -- setup/common.sh@40 -- # local part_no=1 00:05:33.214 13:35:35 -- setup/common.sh@41 -- # local size=1073741824 00:05:33.214 13:35:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:33.214 13:35:35 -- setup/common.sh@44 -- # parts=() 00:05:33.214 13:35:35 -- setup/common.sh@44 -- # local parts 00:05:33.214 13:35:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:33.214 13:35:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.214 13:35:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:33.214 13:35:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:33.214 13:35:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:33.214 13:35:35 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:33.214 13:35:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:33.214 13:35:35 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:33.782 Creating new GPT entries in memory. 00:05:33.782 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:33.782 other utilities. 00:05:33.782 13:35:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:33.782 13:35:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.782 13:35:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:33.782 13:35:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.782 13:35:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:34.719 Creating new GPT entries in memory. 00:05:34.719 The operation has completed successfully. 
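The sgdisk chatter above ("GPT data structures destroyed!", "The operation has completed successfully.") is the partitioning half of partition_drive: wipe the label, then carve a 1 GiB partition under flock while scripts/sync_dev_uevents.sh (invoked at setup/common.sh@53) waits for udev to surface the new node. The sector math checks out: 1073741824 / 512 = 2097152 sectors, so start 2048 plus 2097152 minus 1 gives end sector 2099199, exactly as traced.

# Sketch of the partitioning traced above.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                            # "GPT data structures destroyed!"
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB partition #1
# the test then blocks on the udev-sync helper until /dev/nvme0n1p1 exists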
00:05:34.719 13:35:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:34.719 13:35:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.719 13:35:37 -- setup/common.sh@62 -- # wait 1401894 00:05:34.719 13:35:37 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.719 13:35:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:34.719 13:35:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.719 13:35:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:34.719 13:35:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:34.719 13:35:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.719 13:35:37 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.719 13:35:37 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:34.719 13:35:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:34.719 13:35:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:34.719 13:35:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:34.719 13:35:37 -- setup/devices.sh@53 -- # local found=0 00:05:34.719 13:35:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.719 13:35:37 -- setup/devices.sh@56 -- # : 00:05:34.719 13:35:37 -- setup/devices.sh@59 -- # local pci status 00:05:34.719 13:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.719 13:35:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:34.719 13:35:37 -- setup/devices.sh@47 -- # setup output config 00:05:34.719 13:35:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.719 13:35:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.258 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.258 13:35:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:37.258 13:35:39 -- setup/devices.sh@63 -- # found=1 00:05:37.258 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.258 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.258 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.258 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.258 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.258 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.258 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.517 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.517 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 
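While verify walks the PCI allowlist here, the interesting work already happened a few lines up: the mkfs step at setup/common.sh@66-72. As a sketch, with an illustrative function name (the real helper is simply called mkfs) and the optional size argument that the later whole-disk pass sets to 1024M:

# Sketch of the mkfs helper traced above: format the device, then mount
# it where the dummy test file will live.
nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mkfs_and_mount() {
    local dev=$1 mnt=$2 size=$3
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev" $size   # $size deliberately unquoted: may be empty
    mount "$dev" "$mnt"
}
mkfs_and_mount /dev/nvme0n1p1 "$nvme_mount"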
13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.518 13:35:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.518 13:35:39 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:37.518 13:35:39 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.518 13:35:39 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.518 13:35:39 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.518 13:35:39 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:37.518 13:35:39 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.518 13:35:39 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.518 13:35:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:37.518 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.518 13:35:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.518 13:35:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.777 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:37.777 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:37.777 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:37.777 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:37.777 13:35:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:38.036 13:35:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:38.037 13:35:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.037 13:35:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:38.037 13:35:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:38.037 13:35:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.037 13:35:40 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:38.037 13:35:40 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:38.037 13:35:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:38.037 13:35:40 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.037 13:35:40 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:38.037 13:35:40 -- setup/devices.sh@53 -- # local found=0 00:05:38.037 13:35:40 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.037 13:35:40 -- setup/devices.sh@56 -- # : 00:05:38.037 13:35:40 -- setup/devices.sh@59 -- # local pci status 00:05:38.037 13:35:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.037 13:35:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:38.037 13:35:40 -- setup/devices.sh@47 -- # setup output config 00:05:38.037 13:35:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.037 13:35:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:40.675 13:35:42 -- setup/devices.sh@63 -- # found=1 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.675 13:35:42 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:40.675 13:35:42 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.675 13:35:42 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:40.675 13:35:42 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:40.675 13:35:42 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.675 13:35:42 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:40.675 13:35:42 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:40.675 13:35:42 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:40.675 13:35:42 -- setup/devices.sh@50 -- # local mount_point= 00:05:40.675 13:35:42 -- setup/devices.sh@51 -- # local test_file= 00:05:40.675 13:35:42 -- setup/devices.sh@53 -- # local found=0 00:05:40.675 13:35:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:40.675 13:35:42 -- setup/devices.sh@59 -- # local pci status 00:05:40.675 13:35:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.675 13:35:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:40.675 13:35:42 -- setup/devices.sh@47 -- # setup output config 00:05:40.675 13:35:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.675 13:35:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:43.211 13:35:45 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:43.211 13:35:45 -- setup/devices.sh@63 -- # found=1 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.211 13:35:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.211 13:35:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:43.211 13:35:45 -- setup/devices.sh@68 -- # return 0 00:05:43.211 13:35:45 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:43.211 13:35:45 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:43.211 13:35:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:43.211 13:35:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.211 13:35:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.211 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:43.211 00:05:43.211 real 0m10.434s 00:05:43.211 user 0m3.049s 00:05:43.211 sys 0m5.188s 00:05:43.211 13:35:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.211 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 ************************************ 00:05:43.211 END TEST nvme_mount 00:05:43.211 ************************************ 00:05:43.211 13:35:45 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:43.211 13:35:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.211 13:35:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.211 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:05:43.211 ************************************ 00:05:43.211 START TEST dm_mount 00:05:43.211 ************************************ 00:05:43.211 13:35:45 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:43.211 13:35:45 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:43.211 13:35:45 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:43.211 13:35:45 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:43.211 13:35:45 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:43.211 13:35:45 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:43.211 13:35:45 -- setup/common.sh@40 -- # local part_no=2 00:05:43.211 13:35:45 -- setup/common.sh@41 -- # local size=1073741824 00:05:43.211 13:35:45 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:43.211 13:35:45 -- setup/common.sh@44 -- # parts=() 00:05:43.211 13:35:45 -- setup/common.sh@44 -- # local parts 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.211 13:35:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part++ )) 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.211 13:35:45 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part++ )) 00:05:43.211 13:35:45 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:43.211 13:35:45 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:43.211 13:35:45 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:43.211 13:35:45 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:44.149 Creating new GPT entries in memory. 00:05:44.149 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:44.149 other utilities. 00:05:44.149 13:35:46 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:44.149 13:35:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.149 13:35:46 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:44.149 13:35:46 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:44.149 13:35:46 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:45.087 Creating new GPT entries in memory. 00:05:45.087 The operation has completed successfully. 
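dm_mount now repeats the partitioning dance with part_no=2 (the second 1 GiB partition at sectors 2099200-4196351 follows just below) and then stitches both partitions into a single device-mapper node. The dmsetup table is not echoed in the trace, so the following is a hypothetical reconstruction; a linear concatenation like this would produce the holder@nvme0n1p1:dm-2 and holder@nvme0n1p2:dm-2 pairing seen later:

# Hypothetical dm table: only the target name, the two backing
# partitions, and the resulting dm-2 node are taken from the trace.
s1=$(blockdev --getsz /dev/nvme0n1p1)   # length of p1 in 512 B sectors
s2=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create nvme_dm_test <<TABLE
0 $s1 linear /dev/nvme0n1p1 0
$s1 $s2 linear /dev/nvme0n1p2 0
TABLE
readlink -f /dev/mapper/nvme_dm_test    # -> /dev/dm-2 in this run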
00:05:45.087 13:35:47 -- setup/common.sh@57 -- # (( part++ )) 00:05:45.087 13:35:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.087 13:35:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.087 13:35:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.087 13:35:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:46.467 The operation has completed successfully. 00:05:46.467 13:35:48 -- setup/common.sh@57 -- # (( part++ )) 00:05:46.467 13:35:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.467 13:35:48 -- setup/common.sh@62 -- # wait 1406001 00:05:46.467 13:35:48 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:46.467 13:35:48 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.467 13:35:48 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.467 13:35:48 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:46.467 13:35:48 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:46.467 13:35:48 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.467 13:35:48 -- setup/devices.sh@161 -- # break 00:05:46.467 13:35:48 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.467 13:35:48 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:46.467 13:35:48 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:46.467 13:35:48 -- setup/devices.sh@166 -- # dm=dm-2 00:05:46.467 13:35:48 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:46.467 13:35:48 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:46.467 13:35:48 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.467 13:35:48 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:46.467 13:35:48 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.467 13:35:48 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:46.467 13:35:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:46.468 13:35:48 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.468 13:35:48 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.468 13:35:48 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:46.468 13:35:48 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:46.468 13:35:48 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:46.468 13:35:48 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:46.468 13:35:48 -- setup/devices.sh@53 -- # local found=0 00:05:46.468 13:35:48 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:46.468 13:35:48 -- setup/devices.sh@56 -- # : 00:05:46.468 13:35:48 -- 
setup/devices.sh@59 -- # local pci status 00:05:46.468 13:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.468 13:35:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:46.468 13:35:48 -- setup/devices.sh@47 -- # setup output config 00:05:46.468 13:35:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.468 13:35:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:49.003 13:35:51 -- setup/devices.sh@63 -- # found=1 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.003 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.003 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.004 13:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:49.004 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.004 13:35:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.004 13:35:51 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:49.004 13:35:51 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.004 13:35:51 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.004 13:35:51 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.004 13:35:51 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:49.004 13:35:51 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:49.004 13:35:51 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:49.004 13:35:51 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:49.004 13:35:51 -- setup/devices.sh@50 -- # local mount_point= 00:05:49.004 13:35:51 -- setup/devices.sh@51 -- # local test_file= 00:05:49.004 13:35:51 -- setup/devices.sh@53 -- # local found=0 00:05:49.004 13:35:51 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:49.004 13:35:51 -- setup/devices.sh@59 -- # local pci status 00:05:49.004 13:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.004 13:35:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:49.004 13:35:51 -- setup/devices.sh@47 -- # setup output config 00:05:49.004 13:35:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.004 13:35:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:51.540 13:35:53 -- setup/devices.sh@63 -- # found=1 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 
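As verify scans the remaining PCI functions here, the teardown it gates is cleanup_dm (devices.sh@33 through @43 in the trace just below), which condenses to:

# cleanup_dm, condensed from the surrounding trace: unmount, drop the
# dm node, then scrub filesystem signatures off both backing partitions.
dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
mountpoint -q "$dm_mount" && umount "$dm_mount"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2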
00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.540 13:35:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:51.540 13:35:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.798 13:35:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.798 13:35:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.798 13:35:54 -- setup/devices.sh@68 -- # return 0 00:05:51.798 13:35:54 -- setup/devices.sh@187 -- # cleanup_dm 00:05:51.798 13:35:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:51.798 13:35:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.798 13:35:54 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:51.798 13:35:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.798 13:35:54 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:51.798 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.798 13:35:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.798 13:35:54 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:51.798 00:05:51.799 real 0m8.646s 00:05:51.799 user 0m2.092s 00:05:51.799 sys 0m3.599s 00:05:51.799 13:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.799 13:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:51.799 ************************************ 00:05:51.799 END TEST dm_mount 00:05:51.799 ************************************ 00:05:51.799 13:35:54 -- setup/devices.sh@1 -- # cleanup 00:05:51.799 13:35:54 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:51.799 13:35:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.799 13:35:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.799 13:35:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:51.799 13:35:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.799 13:35:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:52.057 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:52.057 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:52.057 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:52.057 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:52.057 13:35:54 -- setup/devices.sh@12 -- # cleanup_dm
00:05:52.057 13:35:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:52.057 13:35:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:52.057 13:35:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:52.057 13:35:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:52.057 13:35:54 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:52.057 13:35:54 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:52.058
00:05:52.058 real 0m22.566s
00:05:52.058 user 0m6.409s
00:05:52.058 sys 0m10.884s
00:05:52.058 13:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.058 13:35:54 -- common/autotest_common.sh@10 -- # set +x
00:05:52.058 ************************************
00:05:52.058 END TEST devices
00:05:52.058 ************************************
00:05:52.058
00:05:52.058 real 1m13.997s
00:05:52.058 user 0m23.944s
00:05:52.058 sys 0m40.394s
00:05:52.058 13:35:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:52.058 13:35:54 -- common/autotest_common.sh@10 -- # set +x
00:05:52.058 ************************************
00:05:52.058 END TEST setup.sh
00:05:52.058 ************************************
00:05:52.315 13:35:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:54.851 Hugepages
00:05:54.851 node   hugesize    free /  total
00:05:54.851 node0  1048576kB      0 /      0
00:05:54.851 node0     2048kB   2048 /   2048
00:05:54.851 node1  1048576kB      0 /      0
00:05:54.851 node1     2048kB      0 /      0
00:05:54.851
00:05:54.851 Type   BDF           Vendor Device NUMA  Driver   Device  Block devices
00:05:54.851 I/OAT  0000:00:04.0  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.1  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.2  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.3  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.4  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.5  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.6  8086   2021   0     ioatdma  -       -
00:05:54.851 I/OAT  0000:00:04.7  8086   2021   0     ioatdma  -       -
00:05:54.851 NVMe   0000:5e:00.0  8086   0a54   0     nvme     nvme0   nvme0n1
00:05:54.851 I/OAT  0000:80:04.0  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.1  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.2  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.3  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.4  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.5  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.6  8086   2021   1     ioatdma  -       -
00:05:54.851 I/OAT  0000:80:04.7  8086   2021   1     ioatdma  -       -
00:05:54.851 13:35:57 -- spdk/autotest.sh@141 -- # uname -s
00:05:54.851 13:35:57 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]]
00:05:54.851 13:35:57 -- spdk/autotest.sh@143 -- # nvme_namespace_revert
00:05:54.851 13:35:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:57.386 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:57.386 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:57.386 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:57.386 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:57.386 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:57.645 0000:00:04.2 (8086 2021):
ioatdma -> vfio-pci 00:05:57.645 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:57.645 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:58.581 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:58.581 13:36:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:59.539 13:36:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:59.539 13:36:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:59.539 13:36:01 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:59.539 13:36:01 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:59.539 13:36:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:59.539 13:36:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:59.539 13:36:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.539 13:36:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:59.539 13:36:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:59.539 13:36:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:59.539 13:36:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:59.539 13:36:01 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:02.108 Waiting for block devices as requested 00:06:02.108 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:02.366 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:02.366 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:02.366 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:02.366 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:02.625 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:02.625 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:02.625 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:02.884 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:02.884 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:02.884 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:02.884 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:03.143 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:03.143 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:03.143 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:03.402 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:03.402 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:03.402 13:36:05 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:03.402 13:36:05 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:06:03.402 13:36:05 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:03.402 13:36:05 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:03.402 13:36:05 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:03.402 13:36:05 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:03.402 13:36:05 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:06:03.402 13:36:05 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:03.402 13:36:05 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:03.402 13:36:05 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:03.402 13:36:05 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:03.402 13:36:05 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:03.402 13:36:05 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:03.402 13:36:05 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:03.402 13:36:05 -- common/autotest_common.sh@1542 -- # continue 00:06:03.402 13:36:05 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:03.402 13:36:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:03.402 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.402 13:36:05 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:03.402 13:36:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:03.402 13:36:05 -- common/autotest_common.sh@10 -- # set +x 00:06:03.402 13:36:05 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:06.686 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.686 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.944 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:07.201 13:36:09 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:07.201 13:36:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:07.201 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:06:07.201 13:36:09 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:07.201 13:36:09 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:07.202 13:36:09 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:07.202 13:36:09 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:07.202 13:36:09 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:07.202 13:36:09 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:07.202 13:36:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:07.202 
13:36:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:07.202 13:36:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:07.202 13:36:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:07.202 13:36:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:07.202 13:36:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:07.202 13:36:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:07.202 13:36:09 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:07.202 13:36:09 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:07.202 13:36:09 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:06:07.202 13:36:09 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:07.202 13:36:09 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:06:07.202 13:36:09 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:06:07.202 13:36:09 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:06:07.202 13:36:09 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:07.202 13:36:09 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1415499 00:06:07.202 13:36:09 -- common/autotest_common.sh@1583 -- # waitforlisten 1415499 00:06:07.202 13:36:09 -- common/autotest_common.sh@819 -- # '[' -z 1415499 ']' 00:06:07.202 13:36:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.202 13:36:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.202 13:36:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.202 13:36:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.202 13:36:09 -- common/autotest_common.sh@10 -- # set +x 00:06:07.202 [2024-07-11 13:36:09.605096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
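The helpers traced above implement NVMe controller discovery for the cleanup step: get_nvme_bdfs asks gen_nvme.sh for an attach config and extracts each PCI address with jq, and get_nvme_bdfs_by_id keeps only controllers whose sysfs device ID matches (0x0a54 here). A minimal sketch of that pattern, reconstructed from the xtrace output above (the exact bodies in autotest_common.sh may differ):

get_nvme_bdfs() {
    # Pull each controller's PCI address (traddr) out of the generated config
    local bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1
    printf '%s\n' "${bdfs[@]}"
}

get_nvme_bdfs_by_id() {
    # Keep only BDFs whose PCI device ID (e.g. 0x0a54) matches the argument
    local dev_id=$1 bdf bdfs=()
    for bdf in $(get_nvme_bdfs); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$dev_id" ]] && bdfs+=("$bdf")
    done
    ((${#bdfs[@]} == 0)) || printf '%s\n' "${bdfs[@]}"
}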
00:06:07.202 [2024-07-11 13:36:09.605140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415499 ] 00:06:07.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.458 [2024-07-11 13:36:09.659565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.458 [2024-07-11 13:36:09.700555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.458 [2024-07-11 13:36:09.700701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.022 13:36:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.022 13:36:10 -- common/autotest_common.sh@852 -- # return 0 00:06:08.022 13:36:10 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:08.022 13:36:10 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:08.022 13:36:10 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:11.299 nvme0n1 00:06:11.299 13:36:13 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:11.299 [2024-07-11 13:36:13.572259] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:11.299 request: 00:06:11.299 { 00:06:11.299 "nvme_ctrlr_name": "nvme0", 00:06:11.299 "password": "test", 00:06:11.299 "method": "bdev_nvme_opal_revert", 00:06:11.299 "req_id": 1 00:06:11.299 } 00:06:11.299 Got JSON-RPC error response 00:06:11.299 response: 00:06:11.299 { 00:06:11.299 "code": -32602, 00:06:11.299 "message": "Invalid parameters" 00:06:11.299 } 00:06:11.299 13:36:13 -- common/autotest_common.sh@1589 -- # true 00:06:11.299 13:36:13 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:11.299 13:36:13 -- common/autotest_common.sh@1593 -- # killprocess 1415499 00:06:11.299 13:36:13 -- common/autotest_common.sh@926 -- # '[' -z 1415499 ']' 00:06:11.299 13:36:13 -- common/autotest_common.sh@930 -- # kill -0 1415499 00:06:11.299 13:36:13 -- common/autotest_common.sh@931 -- # uname 00:06:11.299 13:36:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.299 13:36:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1415499 00:06:11.300 13:36:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.300 13:36:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.300 13:36:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1415499' 00:06:11.300 killing process with pid 1415499 00:06:11.300 13:36:13 -- common/autotest_common.sh@945 -- # kill 1415499 00:06:11.300 13:36:13 -- common/autotest_common.sh@950 -- # wait 1415499 00:06:13.197 13:36:15 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:06:13.197 13:36:15 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:06:13.197 13:36:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:13.197 13:36:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:13.197 13:36:15 -- spdk/autotest.sh@173 -- # timing_enter lib 00:06:13.197 13:36:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:13.197 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.197 13:36:15 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:13.197 13:36:15 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.197 13:36:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.197 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.197 ************************************ 00:06:13.197 START TEST env 00:06:13.197 ************************************ 00:06:13.197 13:36:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:13.197 * Looking for test storage... 00:06:13.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:13.197 13:36:15 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:13.197 13:36:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.197 13:36:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.197 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.197 ************************************ 00:06:13.197 START TEST env_memory 00:06:13.197 ************************************ 00:06:13.197 13:36:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:13.197 00:06:13.197 00:06:13.197 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.197 http://cunit.sourceforge.net/ 00:06:13.197 00:06:13.197 00:06:13.197 Suite: memory 00:06:13.197 Test: alloc and free memory map ...[2024-07-11 13:36:15.330852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:13.197 passed 00:06:13.197 Test: mem map translation ...[2024-07-11 13:36:15.349299] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:13.197 [2024-07-11 13:36:15.349312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:13.197 [2024-07-11 13:36:15.349365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:13.197 [2024-07-11 13:36:15.349371] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:13.197 passed 00:06:13.197 Test: mem map registration ...[2024-07-11 13:36:15.386298] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:13.197 [2024-07-11 13:36:15.386310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:13.197 passed 00:06:13.197 Test: mem map adjacent registrations ...passed 00:06:13.197 00:06:13.197 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.197 suites 1 1 n/a 0 0 00:06:13.197 tests 4 4 4 0 0 00:06:13.197 asserts 152 152 152 0 n/a 00:06:13.197 00:06:13.197 Elapsed time = 0.137 seconds 00:06:13.197 00:06:13.197 real 0m0.150s 00:06:13.197 user 0m0.140s 00:06:13.197 sys 0m0.009s 00:06:13.197 13:36:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.197 13:36:15 -- common/autotest_common.sh@10 -- # set +x 
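Each test above runs through the same run_test wrapper: it prints the START banner, executes the test body with xtrace enabled, times it (the real/user/sys lines just printed), and closes with the END banner that follows. A simplified sketch of that harness, inferred from the banners in this log (the real wrapper in autotest_common.sh also manages xtrace toggling and exit codes more carefully):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}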
00:06:13.197 ************************************ 00:06:13.197 END TEST env_memory 00:06:13.197 ************************************ 00:06:13.197 13:36:15 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:13.197 13:36:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.197 13:36:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.197 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:06:13.197 ************************************ 00:06:13.197 START TEST env_vtophys 00:06:13.197 ************************************ 00:06:13.197 13:36:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:13.197 EAL: lib.eal log level changed from notice to debug 00:06:13.197 EAL: Detected lcore 0 as core 0 on socket 0 00:06:13.197 EAL: Detected lcore 1 as core 1 on socket 0 00:06:13.197 EAL: Detected lcore 2 as core 2 on socket 0 00:06:13.197 EAL: Detected lcore 3 as core 3 on socket 0 00:06:13.197 EAL: Detected lcore 4 as core 4 on socket 0 00:06:13.197 EAL: Detected lcore 5 as core 5 on socket 0 00:06:13.197 EAL: Detected lcore 6 as core 6 on socket 0 00:06:13.197 EAL: Detected lcore 7 as core 8 on socket 0 00:06:13.197 EAL: Detected lcore 8 as core 9 on socket 0 00:06:13.197 EAL: Detected lcore 9 as core 10 on socket 0 00:06:13.197 EAL: Detected lcore 10 as core 11 on socket 0 00:06:13.197 EAL: Detected lcore 11 as core 12 on socket 0 00:06:13.197 EAL: Detected lcore 12 as core 13 on socket 0 00:06:13.197 EAL: Detected lcore 13 as core 16 on socket 0 00:06:13.197 EAL: Detected lcore 14 as core 17 on socket 0 00:06:13.197 EAL: Detected lcore 15 as core 18 on socket 0 00:06:13.197 EAL: Detected lcore 16 as core 19 on socket 0 00:06:13.197 EAL: Detected lcore 17 as core 20 on socket 0 00:06:13.197 EAL: Detected lcore 18 as core 21 on socket 0 00:06:13.197 EAL: Detected lcore 19 as core 25 on socket 0 00:06:13.197 EAL: Detected lcore 20 as core 26 on socket 0 00:06:13.197 EAL: Detected lcore 21 as core 27 on socket 0 00:06:13.197 EAL: Detected lcore 22 as core 28 on socket 0 00:06:13.197 EAL: Detected lcore 23 as core 29 on socket 0 00:06:13.197 EAL: Detected lcore 24 as core 0 on socket 1 00:06:13.197 EAL: Detected lcore 25 as core 1 on socket 1 00:06:13.197 EAL: Detected lcore 26 as core 2 on socket 1 00:06:13.197 EAL: Detected lcore 27 as core 3 on socket 1 00:06:13.197 EAL: Detected lcore 28 as core 4 on socket 1 00:06:13.197 EAL: Detected lcore 29 as core 5 on socket 1 00:06:13.197 EAL: Detected lcore 30 as core 6 on socket 1 00:06:13.197 EAL: Detected lcore 31 as core 9 on socket 1 00:06:13.197 EAL: Detected lcore 32 as core 10 on socket 1 00:06:13.197 EAL: Detected lcore 33 as core 11 on socket 1 00:06:13.197 EAL: Detected lcore 34 as core 12 on socket 1 00:06:13.198 EAL: Detected lcore 35 as core 13 on socket 1 00:06:13.198 EAL: Detected lcore 36 as core 16 on socket 1 00:06:13.198 EAL: Detected lcore 37 as core 17 on socket 1 00:06:13.198 EAL: Detected lcore 38 as core 18 on socket 1 00:06:13.198 EAL: Detected lcore 39 as core 19 on socket 1 00:06:13.198 EAL: Detected lcore 40 as core 20 on socket 1 00:06:13.198 EAL: Detected lcore 41 as core 21 on socket 1 00:06:13.198 EAL: Detected lcore 42 as core 24 on socket 1 00:06:13.198 EAL: Detected lcore 43 as core 25 on socket 1 00:06:13.198 EAL: Detected lcore 44 as core 26 on socket 1 00:06:13.198 EAL: Detected lcore 45 as core 27 on socket 1 00:06:13.198 EAL: Detected lcore 46 as 
core 28 on socket 1 00:06:13.198 EAL: Detected lcore 47 as core 29 on socket 1 00:06:13.198 EAL: Detected lcore 48 as core 0 on socket 0 00:06:13.198 EAL: Detected lcore 49 as core 1 on socket 0 00:06:13.198 EAL: Detected lcore 50 as core 2 on socket 0 00:06:13.198 EAL: Detected lcore 51 as core 3 on socket 0 00:06:13.198 EAL: Detected lcore 52 as core 4 on socket 0 00:06:13.198 EAL: Detected lcore 53 as core 5 on socket 0 00:06:13.198 EAL: Detected lcore 54 as core 6 on socket 0 00:06:13.198 EAL: Detected lcore 55 as core 8 on socket 0 00:06:13.198 EAL: Detected lcore 56 as core 9 on socket 0 00:06:13.198 EAL: Detected lcore 57 as core 10 on socket 0 00:06:13.198 EAL: Detected lcore 58 as core 11 on socket 0 00:06:13.198 EAL: Detected lcore 59 as core 12 on socket 0 00:06:13.198 EAL: Detected lcore 60 as core 13 on socket 0 00:06:13.198 EAL: Detected lcore 61 as core 16 on socket 0 00:06:13.198 EAL: Detected lcore 62 as core 17 on socket 0 00:06:13.198 EAL: Detected lcore 63 as core 18 on socket 0 00:06:13.198 EAL: Detected lcore 64 as core 19 on socket 0 00:06:13.198 EAL: Detected lcore 65 as core 20 on socket 0 00:06:13.198 EAL: Detected lcore 66 as core 21 on socket 0 00:06:13.198 EAL: Detected lcore 67 as core 25 on socket 0 00:06:13.198 EAL: Detected lcore 68 as core 26 on socket 0 00:06:13.198 EAL: Detected lcore 69 as core 27 on socket 0 00:06:13.198 EAL: Detected lcore 70 as core 28 on socket 0 00:06:13.198 EAL: Detected lcore 71 as core 29 on socket 0 00:06:13.198 EAL: Detected lcore 72 as core 0 on socket 1 00:06:13.198 EAL: Detected lcore 73 as core 1 on socket 1 00:06:13.198 EAL: Detected lcore 74 as core 2 on socket 1 00:06:13.198 EAL: Detected lcore 75 as core 3 on socket 1 00:06:13.198 EAL: Detected lcore 76 as core 4 on socket 1 00:06:13.198 EAL: Detected lcore 77 as core 5 on socket 1 00:06:13.198 EAL: Detected lcore 78 as core 6 on socket 1 00:06:13.198 EAL: Detected lcore 79 as core 9 on socket 1 00:06:13.198 EAL: Detected lcore 80 as core 10 on socket 1 00:06:13.198 EAL: Detected lcore 81 as core 11 on socket 1 00:06:13.198 EAL: Detected lcore 82 as core 12 on socket 1 00:06:13.198 EAL: Detected lcore 83 as core 13 on socket 1 00:06:13.198 EAL: Detected lcore 84 as core 16 on socket 1 00:06:13.198 EAL: Detected lcore 85 as core 17 on socket 1 00:06:13.198 EAL: Detected lcore 86 as core 18 on socket 1 00:06:13.198 EAL: Detected lcore 87 as core 19 on socket 1 00:06:13.198 EAL: Detected lcore 88 as core 20 on socket 1 00:06:13.198 EAL: Detected lcore 89 as core 21 on socket 1 00:06:13.198 EAL: Detected lcore 90 as core 24 on socket 1 00:06:13.198 EAL: Detected lcore 91 as core 25 on socket 1 00:06:13.198 EAL: Detected lcore 92 as core 26 on socket 1 00:06:13.198 EAL: Detected lcore 93 as core 27 on socket 1 00:06:13.198 EAL: Detected lcore 94 as core 28 on socket 1 00:06:13.198 EAL: Detected lcore 95 as core 29 on socket 1 00:06:13.198 EAL: Maximum logical cores by configuration: 128 00:06:13.198 EAL: Detected CPU lcores: 96 00:06:13.198 EAL: Detected NUMA nodes: 2 00:06:13.198 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:13.198 EAL: Detected shared linkage of DPDK 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:13.198 EAL: Registered [vdev] bus. 
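The long "Detected lcore N as core M on socket S" listing above is EAL walking the kernel's CPU topology during initialization. The same mapping can be read directly from sysfs; a hedged equivalent using standard kernel attributes (not part of this test suite):

for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    id=${cpu##*cpu}
    echo "lcore $id: core $(cat "$cpu/topology/core_id") on socket $(cat "$cpu/topology/physical_package_id")"
done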
00:06:13.198 EAL: bus.vdev log level changed from disabled to notice 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:13.198 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:13.198 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:13.198 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:13.198 EAL: No shared files mode enabled, IPC will be disabled 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: Bus pci wants IOVA as 'DC' 00:06:13.198 EAL: Bus vdev wants IOVA as 'DC' 00:06:13.198 EAL: Buses did not request a specific IOVA mode. 00:06:13.198 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:13.198 EAL: Selected IOVA mode 'VA' 00:06:13.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.198 EAL: Probing VFIO support... 00:06:13.198 EAL: IOMMU type 1 (Type 1) is supported 00:06:13.198 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:13.198 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:13.198 EAL: VFIO support initialized 00:06:13.198 EAL: Ask a virtual area of 0x2e000 bytes 00:06:13.198 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:13.198 EAL: Setting up physically contiguous memory... 
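"IOMMU is available, selecting IOVA as VA mode" and "VFIO support initialized" above correspond to a pair of kernel-interface probes. Roughly, as a hedged approximation using standard paths (EAL's actual detection is more thorough):

# Populated IOMMU groups mean the IOMMU is active, so IOVA-as-VA is usable
[[ -d /sys/kernel/iommu_groups && -n $(ls -A /sys/kernel/iommu_groups) ]] &&
    echo "IOMMU active"
# The VFIO container device indicates the vfio module is loaded
[[ -c /dev/vfio/vfio ]] && echo "VFIO available"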
00:06:13.198 EAL: Setting maximum number of open files to 524288 00:06:13.198 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:13.198 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:13.198 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:13.198 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:13.198 EAL: Ask a virtual area of 0x61000 bytes 00:06:13.198 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:13.198 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:13.198 EAL: Ask a virtual area of 0x400000000 bytes 00:06:13.198 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:13.198 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:13.198 EAL: Hugepages will be freed exactly as allocated. 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: TSC frequency is ~2300000 KHz 00:06:13.198 EAL: Main lcore 0 is ready (tid=7f23a87d0a00;cpuset=[0]) 00:06:13.198 EAL: Trying to obtain current memory policy. 00:06:13.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.198 EAL: Restoring previous memory policy: 0 00:06:13.198 EAL: request: mp_malloc_sync 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: Heap on socket 0 was expanded by 2MB 00:06:13.198 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:06:13.198 EAL: probe driver: 8086:37d2 net_i40e 00:06:13.198 EAL: Not managed by a supported kernel driver, skipped 00:06:13.198 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:06:13.198 EAL: probe driver: 8086:37d2 net_i40e 00:06:13.198 EAL: Not managed by a supported kernel driver, skipped 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:13.198 EAL: Mem event callback 'spdk:(nil)' registered 00:06:13.198 00:06:13.198 00:06:13.198 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.198 http://cunit.sourceforge.net/ 00:06:13.198 00:06:13.198 00:06:13.198 Suite: components_suite 00:06:13.198 Test: vtophys_malloc_test ...passed 00:06:13.198 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:13.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.198 EAL: Restoring previous memory policy: 4 00:06:13.198 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.198 EAL: request: mp_malloc_sync 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: Heap on socket 0 was expanded by 4MB 00:06:13.198 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.198 EAL: request: mp_malloc_sync 00:06:13.198 EAL: No shared files mode enabled, IPC is disabled 00:06:13.198 EAL: Heap on socket 0 was shrunk by 4MB 00:06:13.198 EAL: Trying to obtain current memory policy. 00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 6MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was shrunk by 6MB 00:06:13.199 EAL: Trying to obtain current memory policy. 00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 10MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was shrunk by 10MB 00:06:13.199 EAL: Trying to obtain current memory policy. 
00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 18MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was shrunk by 18MB 00:06:13.199 EAL: Trying to obtain current memory policy. 00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 34MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was shrunk by 34MB 00:06:13.199 EAL: Trying to obtain current memory policy. 00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 66MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was shrunk by 66MB 00:06:13.199 EAL: Trying to obtain current memory policy. 00:06:13.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.199 EAL: Restoring previous memory policy: 4 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.199 EAL: request: mp_malloc_sync 00:06:13.199 EAL: No shared files mode enabled, IPC is disabled 00:06:13.199 EAL: Heap on socket 0 was expanded by 130MB 00:06:13.199 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.457 EAL: request: mp_malloc_sync 00:06:13.457 EAL: No shared files mode enabled, IPC is disabled 00:06:13.457 EAL: Heap on socket 0 was shrunk by 130MB 00:06:13.457 EAL: Trying to obtain current memory policy. 00:06:13.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.457 EAL: Restoring previous memory policy: 4 00:06:13.457 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.457 EAL: request: mp_malloc_sync 00:06:13.457 EAL: No shared files mode enabled, IPC is disabled 00:06:13.457 EAL: Heap on socket 0 was expanded by 258MB 00:06:13.457 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.457 EAL: request: mp_malloc_sync 00:06:13.457 EAL: No shared files mode enabled, IPC is disabled 00:06:13.457 EAL: Heap on socket 0 was shrunk by 258MB 00:06:13.457 EAL: Trying to obtain current memory policy. 
00:06:13.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.457 EAL: Restoring previous memory policy: 4 00:06:13.457 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.457 EAL: request: mp_malloc_sync 00:06:13.457 EAL: No shared files mode enabled, IPC is disabled 00:06:13.457 EAL: Heap on socket 0 was expanded by 514MB 00:06:13.714 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.714 EAL: request: mp_malloc_sync 00:06:13.714 EAL: No shared files mode enabled, IPC is disabled 00:06:13.714 EAL: Heap on socket 0 was shrunk by 514MB 00:06:13.714 EAL: Trying to obtain current memory policy. 00:06:13.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:13.972 EAL: Restoring previous memory policy: 4 00:06:13.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.972 EAL: request: mp_malloc_sync 00:06:13.972 EAL: No shared files mode enabled, IPC is disabled 00:06:13.972 EAL: Heap on socket 0 was expanded by 1026MB 00:06:13.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.230 EAL: request: mp_malloc_sync 00:06:14.230 EAL: No shared files mode enabled, IPC is disabled 00:06:14.230 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:14.230 passed 00:06:14.230 00:06:14.230 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.230 suites 1 1 n/a 0 0 00:06:14.230 tests 2 2 2 0 0 00:06:14.230 asserts 497 497 497 0 n/a 00:06:14.230 00:06:14.230 Elapsed time = 0.962 seconds 00:06:14.230 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.230 EAL: request: mp_malloc_sync 00:06:14.230 EAL: No shared files mode enabled, IPC is disabled 00:06:14.230 EAL: Heap on socket 0 was shrunk by 2MB 00:06:14.230 EAL: No shared files mode enabled, IPC is disabled 00:06:14.230 EAL: No shared files mode enabled, IPC is disabled 00:06:14.230 EAL: No shared files mode enabled, IPC is disabled 00:06:14.230 00:06:14.230 real 0m1.075s 00:06:14.230 user 0m0.625s 00:06:14.230 sys 0m0.419s 00:06:14.230 13:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.230 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.230 ************************************ 00:06:14.230 END TEST env_vtophys 00:06:14.230 ************************************ 00:06:14.230 13:36:16 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:14.230 13:36:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.230 13:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.230 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.230 ************************************ 00:06:14.230 START TEST env_pci 00:06:14.230 ************************************ 00:06:14.230 13:36:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:14.230 00:06:14.230 00:06:14.230 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.231 http://cunit.sourceforge.net/ 00:06:14.231 00:06:14.231 00:06:14.231 Suite: pci 00:06:14.231 Test: pci_hook ...[2024-07-11 13:36:16.605636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1416935 has claimed it 00:06:14.231 EAL: Cannot find device (10000:00:01.0) 00:06:14.231 EAL: Failed to attach device on primary process 00:06:14.231 passed 00:06:14.231 00:06:14.231 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.231 suites 1 1 n/a 0 0 00:06:14.231 tests 1 1 1 0 0 
00:06:14.231 asserts 25 25 25 0 n/a 00:06:14.231 00:06:14.231 Elapsed time = 0.029 seconds 00:06:14.231 00:06:14.231 real 0m0.048s 00:06:14.231 user 0m0.018s 00:06:14.231 sys 0m0.030s 00:06:14.231 13:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.231 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.231 ************************************ 00:06:14.231 END TEST env_pci 00:06:14.231 ************************************ 00:06:14.231 13:36:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:14.231 13:36:16 -- env/env.sh@15 -- # uname 00:06:14.231 13:36:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:14.231 13:36:16 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:14.231 13:36:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.231 13:36:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:14.231 13:36:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.231 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:06:14.231 ************************************ 00:06:14.231 START TEST env_dpdk_post_init 00:06:14.231 ************************************ 00:06:14.231 13:36:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:14.489 EAL: Detected CPU lcores: 96 00:06:14.489 EAL: Detected NUMA nodes: 2 00:06:14.489 EAL: Detected shared linkage of DPDK 00:06:14.489 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:14.489 EAL: Selected IOVA mode 'VA' 00:06:14.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.489 EAL: VFIO support initialized 00:06:14.489 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:14.489 EAL: Using IOMMU type 1 (Type 1) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:14.489 EAL: Ignore mapping IO port bar(1) 00:06:14.489 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:15.423 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:15.423 EAL: Ignore mapping IO port bar(1) 00:06:15.423 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:18.703 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:18.703 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:18.703 Starting DPDK initialization... 00:06:18.703 Starting SPDK post initialization... 00:06:18.703 SPDK NVMe probe 00:06:18.703 Attaching to 0000:5e:00.0 00:06:18.703 Attached to 0000:5e:00.0 00:06:18.703 Cleaning up... 00:06:18.703 00:06:18.703 real 0m4.329s 00:06:18.703 user 0m3.286s 00:06:18.703 sys 0m0.111s 00:06:18.703 13:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.703 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.703 ************************************ 00:06:18.703 END TEST env_dpdk_post_init 00:06:18.703 ************************************ 00:06:18.703 13:36:21 -- env/env.sh@26 -- # uname 00:06:18.703 13:36:21 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:18.703 13:36:21 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.703 13:36:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.703 13:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.704 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.704 ************************************ 00:06:18.704 START TEST env_mem_callbacks 00:06:18.704 ************************************ 00:06:18.704 13:36:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.704 EAL: Detected CPU lcores: 96 00:06:18.704 EAL: Detected NUMA nodes: 2 00:06:18.704 EAL: Detected shared linkage of DPDK 00:06:18.704 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.704 EAL: Selected IOVA mode 'VA' 00:06:18.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.704 EAL: VFIO support initialized 00:06:18.704 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.704 00:06:18.704 00:06:18.704 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.704 http://cunit.sourceforge.net/ 00:06:18.704 00:06:18.704 00:06:18.704 Suite: memory 00:06:18.704 Test: test ... 
00:06:18.704 register 0x200000200000 2097152 00:06:18.704 malloc 3145728 00:06:18.704 register 0x200000400000 4194304 00:06:18.704 buf 0x200000500000 len 3145728 PASSED 00:06:18.704 malloc 64 00:06:18.704 buf 0x2000004fff40 len 64 PASSED 00:06:18.704 malloc 4194304 00:06:18.704 register 0x200000800000 6291456 00:06:18.704 buf 0x200000a00000 len 4194304 PASSED 00:06:18.704 free 0x200000500000 3145728 00:06:18.704 free 0x2000004fff40 64 00:06:18.704 unregister 0x200000400000 4194304 PASSED 00:06:18.704 free 0x200000a00000 4194304 00:06:18.704 unregister 0x200000800000 6291456 PASSED 00:06:18.704 malloc 8388608 00:06:18.704 register 0x200000400000 10485760 00:06:18.704 buf 0x200000600000 len 8388608 PASSED 00:06:18.704 free 0x200000600000 8388608 00:06:18.704 unregister 0x200000400000 10485760 PASSED 00:06:18.704 passed 00:06:18.704 00:06:18.704 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.704 suites 1 1 n/a 0 0 00:06:18.704 tests 1 1 1 0 0 00:06:18.704 asserts 15 15 15 0 n/a 00:06:18.704 00:06:18.704 Elapsed time = 0.005 seconds 00:06:18.704 00:06:18.704 real 0m0.052s 00:06:18.704 user 0m0.016s 00:06:18.704 sys 0m0.036s 00:06:18.704 13:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.704 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.704 ************************************ 00:06:18.704 END TEST env_mem_callbacks 00:06:18.704 ************************************ 00:06:18.704 00:06:18.704 real 0m5.938s 00:06:18.704 user 0m4.181s 00:06:18.704 sys 0m0.829s 00:06:18.704 13:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.704 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.704 ************************************ 00:06:18.704 END TEST env 00:06:18.704 ************************************ 00:06:18.963 13:36:21 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:18.963 13:36:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.963 13:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.963 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.963 ************************************ 00:06:18.963 START TEST rpc 00:06:18.963 ************************************ 00:06:18.963 13:36:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:18.963 * Looking for test storage... 00:06:18.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:18.963 13:36:21 -- rpc/rpc.sh@65 -- # spdk_pid=1417798 00:06:18.963 13:36:21 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:18.963 13:36:21 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.963 13:36:21 -- rpc/rpc.sh@67 -- # waitforlisten 1417798 00:06:18.963 13:36:21 -- common/autotest_common.sh@819 -- # '[' -z 1417798 ']' 00:06:18.963 13:36:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.963 13:36:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.963 13:36:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
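waitforlisten, echoed above, blocks until the freshly started spdk_tgt both stays alive and answers on its UNIX-domain RPC socket. A condensed sketch of that loop (assumption: it polls rpc_get_methods through scripts/rpc.py, consistent with the max_retries=100 seen in the trace; the helper in autotest_common.sh may differ in detail):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        # Give up immediately if the target process died
        kill -0 "$pid" 2> /dev/null || return 1
        # The socket is ready once any RPC succeeds
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}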
00:06:18.963 13:36:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.963 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:06:18.963 [2024-07-11 13:36:21.304436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:18.963 [2024-07-11 13:36:21.304483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1417798 ] 00:06:18.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.963 [2024-07-11 13:36:21.356595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.963 [2024-07-11 13:36:21.395064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.963 [2024-07-11 13:36:21.395186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:18.963 [2024-07-11 13:36:21.395195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1417798' to capture a snapshot of events at runtime. 00:06:18.963 [2024-07-11 13:36:21.395201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1417798 for offline analysis/debug. 00:06:18.963 [2024-07-11 13:36:21.395224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.901 13:36:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.901 13:36:22 -- common/autotest_common.sh@852 -- # return 0 00:06:19.901 13:36:22 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:19.901 13:36:22 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:19.901 13:36:22 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:19.901 13:36:22 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:19.901 13:36:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.901 13:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.901 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.901 ************************************ 00:06:19.901 START TEST rpc_integrity 00:06:19.901 ************************************ 00:06:19.901 13:36:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:19.901 13:36:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:19.901 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.901 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.901 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.901 13:36:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:19.901 13:36:22 -- rpc/rpc.sh@13 -- # jq length 00:06:19.902 13:36:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:19.902 13:36:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:19.902 13:36:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:19.902 { 00:06:19.902 "name": "Malloc0", 00:06:19.902 "aliases": [ 00:06:19.902 "61cf5912-c27b-4f8a-a596-65c06fa89582" 00:06:19.902 ], 00:06:19.902 "product_name": "Malloc disk", 00:06:19.902 "block_size": 512, 00:06:19.902 "num_blocks": 16384, 00:06:19.902 "uuid": "61cf5912-c27b-4f8a-a596-65c06fa89582", 00:06:19.902 "assigned_rate_limits": { 00:06:19.902 "rw_ios_per_sec": 0, 00:06:19.902 "rw_mbytes_per_sec": 0, 00:06:19.902 "r_mbytes_per_sec": 0, 00:06:19.902 "w_mbytes_per_sec": 0 00:06:19.902 }, 00:06:19.902 "claimed": false, 00:06:19.902 "zoned": false, 00:06:19.902 "supported_io_types": { 00:06:19.902 "read": true, 00:06:19.902 "write": true, 00:06:19.902 "unmap": true, 00:06:19.902 "write_zeroes": true, 00:06:19.902 "flush": true, 00:06:19.902 "reset": true, 00:06:19.902 "compare": false, 00:06:19.902 "compare_and_write": false, 00:06:19.902 "abort": true, 00:06:19.902 "nvme_admin": false, 00:06:19.902 "nvme_io": false 00:06:19.902 }, 00:06:19.902 "memory_domains": [ 00:06:19.902 { 00:06:19.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.902 "dma_device_type": 2 00:06:19.902 } 00:06:19.902 ], 00:06:19.902 "driver_specific": {} 00:06:19.902 } 00:06:19.902 ]' 00:06:19.902 13:36:22 -- rpc/rpc.sh@17 -- # jq length 00:06:19.902 13:36:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:19.902 13:36:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 [2024-07-11 13:36:22.235211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:19.902 [2024-07-11 13:36:22.235243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.902 [2024-07-11 13:36:22.235255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8b0120 00:06:19.902 [2024-07-11 13:36:22.235261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.902 [2024-07-11 13:36:22.236311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.902 [2024-07-11 13:36:22.236332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:19.902 Passthru0 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:19.902 { 00:06:19.902 "name": "Malloc0", 00:06:19.902 "aliases": [ 00:06:19.902 "61cf5912-c27b-4f8a-a596-65c06fa89582" 00:06:19.902 ], 00:06:19.902 "product_name": "Malloc disk", 00:06:19.902 "block_size": 512, 00:06:19.902 "num_blocks": 16384, 00:06:19.902 "uuid": "61cf5912-c27b-4f8a-a596-65c06fa89582", 00:06:19.902 "assigned_rate_limits": { 00:06:19.902 "rw_ios_per_sec": 0, 00:06:19.902 "rw_mbytes_per_sec": 0, 00:06:19.902 
"r_mbytes_per_sec": 0, 00:06:19.902 "w_mbytes_per_sec": 0 00:06:19.902 }, 00:06:19.902 "claimed": true, 00:06:19.902 "claim_type": "exclusive_write", 00:06:19.902 "zoned": false, 00:06:19.902 "supported_io_types": { 00:06:19.902 "read": true, 00:06:19.902 "write": true, 00:06:19.902 "unmap": true, 00:06:19.902 "write_zeroes": true, 00:06:19.902 "flush": true, 00:06:19.902 "reset": true, 00:06:19.902 "compare": false, 00:06:19.902 "compare_and_write": false, 00:06:19.902 "abort": true, 00:06:19.902 "nvme_admin": false, 00:06:19.902 "nvme_io": false 00:06:19.902 }, 00:06:19.902 "memory_domains": [ 00:06:19.902 { 00:06:19.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.902 "dma_device_type": 2 00:06:19.902 } 00:06:19.902 ], 00:06:19.902 "driver_specific": {} 00:06:19.902 }, 00:06:19.902 { 00:06:19.902 "name": "Passthru0", 00:06:19.902 "aliases": [ 00:06:19.902 "084b9845-d0f1-5a78-b964-e85d0d8b2ad1" 00:06:19.902 ], 00:06:19.902 "product_name": "passthru", 00:06:19.902 "block_size": 512, 00:06:19.902 "num_blocks": 16384, 00:06:19.902 "uuid": "084b9845-d0f1-5a78-b964-e85d0d8b2ad1", 00:06:19.902 "assigned_rate_limits": { 00:06:19.902 "rw_ios_per_sec": 0, 00:06:19.902 "rw_mbytes_per_sec": 0, 00:06:19.902 "r_mbytes_per_sec": 0, 00:06:19.902 "w_mbytes_per_sec": 0 00:06:19.902 }, 00:06:19.902 "claimed": false, 00:06:19.902 "zoned": false, 00:06:19.902 "supported_io_types": { 00:06:19.902 "read": true, 00:06:19.902 "write": true, 00:06:19.902 "unmap": true, 00:06:19.902 "write_zeroes": true, 00:06:19.902 "flush": true, 00:06:19.902 "reset": true, 00:06:19.902 "compare": false, 00:06:19.902 "compare_and_write": false, 00:06:19.902 "abort": true, 00:06:19.902 "nvme_admin": false, 00:06:19.902 "nvme_io": false 00:06:19.902 }, 00:06:19.902 "memory_domains": [ 00:06:19.902 { 00:06:19.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.902 "dma_device_type": 2 00:06:19.902 } 00:06:19.902 ], 00:06:19.902 "driver_specific": { 00:06:19.902 "passthru": { 00:06:19.902 "name": "Passthru0", 00:06:19.902 "base_bdev_name": "Malloc0" 00:06:19.902 } 00:06:19.902 } 00:06:19.902 } 00:06:19.902 ]' 00:06:19.902 13:36:22 -- rpc/rpc.sh@21 -- # jq length 00:06:19.902 13:36:22 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:19.902 13:36:22 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:19.902 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:19.902 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:19.902 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:19.902 13:36:22 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:19.903 13:36:22 -- rpc/rpc.sh@26 -- # jq length 00:06:20.231 13:36:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.232 00:06:20.232 real 0m0.259s 00:06:20.232 user 0m0.177s 00:06:20.232 sys 0m0.026s 00:06:20.232 13:36:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 ************************************ 
00:06:20.232 END TEST rpc_integrity 00:06:20.232 ************************************ 00:06:20.232 13:36:22 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:20.232 13:36:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.232 13:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 ************************************ 00:06:20.232 START TEST rpc_plugins 00:06:20.232 ************************************ 00:06:20.232 13:36:22 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:20.232 13:36:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:20.232 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.232 13:36:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:20.232 13:36:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:20.232 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.232 13:36:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:20.232 { 00:06:20.232 "name": "Malloc1", 00:06:20.232 "aliases": [ 00:06:20.232 "9d8a5e0b-c297-4d65-9a26-77eea5b418d4" 00:06:20.232 ], 00:06:20.232 "product_name": "Malloc disk", 00:06:20.232 "block_size": 4096, 00:06:20.232 "num_blocks": 256, 00:06:20.232 "uuid": "9d8a5e0b-c297-4d65-9a26-77eea5b418d4", 00:06:20.232 "assigned_rate_limits": { 00:06:20.232 "rw_ios_per_sec": 0, 00:06:20.232 "rw_mbytes_per_sec": 0, 00:06:20.232 "r_mbytes_per_sec": 0, 00:06:20.232 "w_mbytes_per_sec": 0 00:06:20.232 }, 00:06:20.232 "claimed": false, 00:06:20.232 "zoned": false, 00:06:20.232 "supported_io_types": { 00:06:20.232 "read": true, 00:06:20.232 "write": true, 00:06:20.232 "unmap": true, 00:06:20.232 "write_zeroes": true, 00:06:20.232 "flush": true, 00:06:20.232 "reset": true, 00:06:20.232 "compare": false, 00:06:20.232 "compare_and_write": false, 00:06:20.232 "abort": true, 00:06:20.232 "nvme_admin": false, 00:06:20.232 "nvme_io": false 00:06:20.232 }, 00:06:20.232 "memory_domains": [ 00:06:20.232 { 00:06:20.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.232 "dma_device_type": 2 00:06:20.232 } 00:06:20.232 ], 00:06:20.232 "driver_specific": {} 00:06:20.232 } 00:06:20.232 ]' 00:06:20.232 13:36:22 -- rpc/rpc.sh@32 -- # jq length 00:06:20.232 13:36:22 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:20.232 13:36:22 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:20.232 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.232 13:36:22 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:20.232 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.232 13:36:22 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:20.232 13:36:22 -- rpc/rpc.sh@36 -- # jq length 00:06:20.232 13:36:22 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:20.232 00:06:20.232 real 0m0.134s 00:06:20.232 user 0m0.090s 00:06:20.232 sys 0m0.014s 00:06:20.232 13:36:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.232 13:36:22 -- 
common/autotest_common.sh@10 -- # set +x 00:06:20.232 ************************************ 00:06:20.232 END TEST rpc_plugins 00:06:20.232 ************************************ 00:06:20.232 13:36:22 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:20.232 13:36:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.232 13:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 ************************************ 00:06:20.232 START TEST rpc_trace_cmd_test 00:06:20.232 ************************************ 00:06:20.232 13:36:22 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:20.232 13:36:22 -- rpc/rpc.sh@40 -- # local info 00:06:20.232 13:36:22 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.232 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.232 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.232 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.232 13:36:22 -- rpc/rpc.sh@42 -- # info='{ 00:06:20.232 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1417798", 00:06:20.232 "tpoint_group_mask": "0x8", 00:06:20.232 "iscsi_conn": { 00:06:20.232 "mask": "0x2", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "scsi": { 00:06:20.232 "mask": "0x4", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "bdev": { 00:06:20.232 "mask": "0x8", 00:06:20.232 "tpoint_mask": "0xffffffffffffffff" 00:06:20.232 }, 00:06:20.232 "nvmf_rdma": { 00:06:20.232 "mask": "0x10", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "nvmf_tcp": { 00:06:20.232 "mask": "0x20", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "ftl": { 00:06:20.232 "mask": "0x40", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "blobfs": { 00:06:20.232 "mask": "0x80", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "dsa": { 00:06:20.232 "mask": "0x200", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "thread": { 00:06:20.232 "mask": "0x400", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "nvme_pcie": { 00:06:20.232 "mask": "0x800", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "iaa": { 00:06:20.232 "mask": "0x1000", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "nvme_tcp": { 00:06:20.232 "mask": "0x2000", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 }, 00:06:20.232 "bdev_nvme": { 00:06:20.232 "mask": "0x4000", 00:06:20.232 "tpoint_mask": "0x0" 00:06:20.232 } 00:06:20.232 }' 00:06:20.232 13:36:22 -- rpc/rpc.sh@43 -- # jq length 00:06:20.232 13:36:22 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:20.232 13:36:22 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.492 13:36:22 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.492 13:36:22 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.492 13:36:22 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.492 13:36:22 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.492 13:36:22 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.492 13:36:22 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:20.492 13:36:22 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:20.492 00:06:20.492 real 0m0.211s 00:06:20.492 user 0m0.185s 00:06:20.492 sys 0m0.020s 00:06:20.492 13:36:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.492 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.492 ************************************ 
00:06:20.492 END TEST rpc_trace_cmd_test 00:06:20.492 ************************************ 00:06:20.492 13:36:22 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:20.492 13:36:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:20.492 13:36:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:20.492 13:36:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:20.492 13:36:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.492 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.492 ************************************ 00:06:20.492 START TEST rpc_daemon_integrity 00:06:20.492 ************************************ 00:06:20.492 13:36:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:20.492 13:36:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.492 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.492 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.492 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.492 13:36:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.492 13:36:22 -- rpc/rpc.sh@13 -- # jq length 00:06:20.492 13:36:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.492 13:36:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.492 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.492 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.492 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.492 13:36:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:20.492 13:36:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.492 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.492 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.492 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.492 13:36:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.492 { 00:06:20.492 "name": "Malloc2", 00:06:20.492 "aliases": [ 00:06:20.492 "765fb122-ddc8-4861-85ae-82cd0a786bf8" 00:06:20.492 ], 00:06:20.492 "product_name": "Malloc disk", 00:06:20.492 "block_size": 512, 00:06:20.492 "num_blocks": 16384, 00:06:20.492 "uuid": "765fb122-ddc8-4861-85ae-82cd0a786bf8", 00:06:20.492 "assigned_rate_limits": { 00:06:20.492 "rw_ios_per_sec": 0, 00:06:20.492 "rw_mbytes_per_sec": 0, 00:06:20.492 "r_mbytes_per_sec": 0, 00:06:20.492 "w_mbytes_per_sec": 0 00:06:20.492 }, 00:06:20.492 "claimed": false, 00:06:20.492 "zoned": false, 00:06:20.492 "supported_io_types": { 00:06:20.492 "read": true, 00:06:20.492 "write": true, 00:06:20.492 "unmap": true, 00:06:20.492 "write_zeroes": true, 00:06:20.492 "flush": true, 00:06:20.492 "reset": true, 00:06:20.492 "compare": false, 00:06:20.492 "compare_and_write": false, 00:06:20.492 "abort": true, 00:06:20.492 "nvme_admin": false, 00:06:20.492 "nvme_io": false 00:06:20.492 }, 00:06:20.492 "memory_domains": [ 00:06:20.492 { 00:06:20.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.492 "dma_device_type": 2 00:06:20.492 } 00:06:20.492 ], 00:06:20.492 "driver_specific": {} 00:06:20.492 } 00:06:20.492 ]' 00:06:20.492 13:36:22 -- rpc/rpc.sh@17 -- # jq length 00:06:20.493 13:36:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.493 13:36:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:20.493 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.493 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.493 [2024-07-11 13:36:22.937111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:20.493 [2024-07-11 
13:36:22.937140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.493 [2024-07-11 13:36:22.937152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8afac0 00:06:20.493 [2024-07-11 13:36:22.937163] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.493 [2024-07-11 13:36:22.938082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.493 [2024-07-11 13:36:22.938104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.493 Passthru0 00:06:20.493 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.493 13:36:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.493 13:36:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.493 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:20.752 13:36:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.752 13:36:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.752 { 00:06:20.752 "name": "Malloc2", 00:06:20.752 "aliases": [ 00:06:20.752 "765fb122-ddc8-4861-85ae-82cd0a786bf8" 00:06:20.752 ], 00:06:20.752 "product_name": "Malloc disk", 00:06:20.752 "block_size": 512, 00:06:20.752 "num_blocks": 16384, 00:06:20.752 "uuid": "765fb122-ddc8-4861-85ae-82cd0a786bf8", 00:06:20.752 "assigned_rate_limits": { 00:06:20.752 "rw_ios_per_sec": 0, 00:06:20.752 "rw_mbytes_per_sec": 0, 00:06:20.752 "r_mbytes_per_sec": 0, 00:06:20.752 "w_mbytes_per_sec": 0 00:06:20.752 }, 00:06:20.752 "claimed": true, 00:06:20.752 "claim_type": "exclusive_write", 00:06:20.752 "zoned": false, 00:06:20.752 "supported_io_types": { 00:06:20.752 "read": true, 00:06:20.752 "write": true, 00:06:20.752 "unmap": true, 00:06:20.752 "write_zeroes": true, 00:06:20.752 "flush": true, 00:06:20.752 "reset": true, 00:06:20.752 "compare": false, 00:06:20.752 "compare_and_write": false, 00:06:20.752 "abort": true, 00:06:20.752 "nvme_admin": false, 00:06:20.752 "nvme_io": false 00:06:20.752 }, 00:06:20.752 "memory_domains": [ 00:06:20.752 { 00:06:20.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.752 "dma_device_type": 2 00:06:20.752 } 00:06:20.752 ], 00:06:20.752 "driver_specific": {} 00:06:20.752 }, 00:06:20.752 { 00:06:20.752 "name": "Passthru0", 00:06:20.752 "aliases": [ 00:06:20.752 "476fff41-cff3-55b3-b7e3-7ba5008df89a" 00:06:20.752 ], 00:06:20.752 "product_name": "passthru", 00:06:20.752 "block_size": 512, 00:06:20.752 "num_blocks": 16384, 00:06:20.752 "uuid": "476fff41-cff3-55b3-b7e3-7ba5008df89a", 00:06:20.752 "assigned_rate_limits": { 00:06:20.753 "rw_ios_per_sec": 0, 00:06:20.753 "rw_mbytes_per_sec": 0, 00:06:20.753 "r_mbytes_per_sec": 0, 00:06:20.753 "w_mbytes_per_sec": 0 00:06:20.753 }, 00:06:20.753 "claimed": false, 00:06:20.753 "zoned": false, 00:06:20.753 "supported_io_types": { 00:06:20.753 "read": true, 00:06:20.753 "write": true, 00:06:20.753 "unmap": true, 00:06:20.753 "write_zeroes": true, 00:06:20.753 "flush": true, 00:06:20.753 "reset": true, 00:06:20.753 "compare": false, 00:06:20.753 "compare_and_write": false, 00:06:20.753 "abort": true, 00:06:20.753 "nvme_admin": false, 00:06:20.753 "nvme_io": false 00:06:20.753 }, 00:06:20.753 "memory_domains": [ 00:06:20.753 { 00:06:20.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.753 "dma_device_type": 2 00:06:20.753 } 00:06:20.753 ], 00:06:20.753 "driver_specific": { 00:06:20.753 "passthru": { 00:06:20.753 "name": "Passthru0", 00:06:20.753 "base_bdev_name": "Malloc2" 00:06:20.753 } 00:06:20.753 } 00:06:20.753 } 
00:06:20.753 ]' 00:06:20.753 13:36:22 -- rpc/rpc.sh@21 -- # jq length 00:06:20.753 13:36:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.753 13:36:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.753 13:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.753 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.753 13:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.753 13:36:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:20.753 13:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.753 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.753 13:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.753 13:36:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.753 13:36:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.753 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.753 13:36:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.753 13:36:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.753 13:36:23 -- rpc/rpc.sh@26 -- # jq length 00:06:20.753 13:36:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.753 00:06:20.753 real 0m0.254s 00:06:20.753 user 0m0.168s 00:06:20.753 sys 0m0.033s 00:06:20.753 13:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.753 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:20.753 ************************************ 00:06:20.753 END TEST rpc_daemon_integrity 00:06:20.753 ************************************ 00:06:20.753 13:36:23 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:20.753 13:36:23 -- rpc/rpc.sh@84 -- # killprocess 1417798 00:06:20.753 13:36:23 -- common/autotest_common.sh@926 -- # '[' -z 1417798 ']' 00:06:20.753 13:36:23 -- common/autotest_common.sh@930 -- # kill -0 1417798 00:06:20.753 13:36:23 -- common/autotest_common.sh@931 -- # uname 00:06:20.753 13:36:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.753 13:36:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1417798 00:06:20.753 13:36:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.753 13:36:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.753 13:36:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1417798' 00:06:20.753 killing process with pid 1417798 00:06:20.753 13:36:23 -- common/autotest_common.sh@945 -- # kill 1417798 00:06:20.753 13:36:23 -- common/autotest_common.sh@950 -- # wait 1417798 00:06:21.012 00:06:21.012 real 0m2.273s 00:06:21.012 user 0m2.953s 00:06:21.012 sys 0m0.564s 00:06:21.012 13:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.012 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.012 ************************************ 00:06:21.012 END TEST rpc 00:06:21.012 ************************************ 00:06:21.272 13:36:23 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:21.272 13:36:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.272 13:36:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.272 ************************************ 00:06:21.272 START TEST rpc_client 00:06:21.272 ************************************ 00:06:21.272 13:36:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
00:06:21.272 * Looking for test storage... 00:06:21.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:21.272 13:36:23 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:21.272 OK 00:06:21.272 13:36:23 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.272 00:06:21.272 real 0m0.094s 00:06:21.272 user 0m0.042s 00:06:21.272 sys 0m0.059s 00:06:21.272 13:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.272 ************************************ 00:06:21.272 END TEST rpc_client 00:06:21.272 ************************************ 00:06:21.272 13:36:23 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:21.272 13:36:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.272 13:36:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.272 ************************************ 00:06:21.272 START TEST json_config 00:06:21.272 ************************************ 00:06:21.272 13:36:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:21.272 13:36:23 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.272 13:36:23 -- nvmf/common.sh@7 -- # uname -s 00:06:21.272 13:36:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.272 13:36:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.272 13:36:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.272 13:36:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.272 13:36:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.272 13:36:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.272 13:36:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.272 13:36:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.272 13:36:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.272 13:36:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.272 13:36:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.272 13:36:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:21.272 13:36:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.272 13:36:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.272 13:36:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.272 13:36:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.272 13:36:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.272 13:36:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.272 13:36:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.272 13:36:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.272 13:36:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.272 13:36:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.272 13:36:23 -- paths/export.sh@5 -- # export PATH 00:06:21.272 13:36:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.272 13:36:23 -- nvmf/common.sh@46 -- # : 0 00:06:21.272 13:36:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:21.272 13:36:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:21.272 13:36:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:21.272 13:36:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.272 13:36:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.272 13:36:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:21.272 13:36:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:21.272 13:36:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:21.272 13:36:23 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:21.272 13:36:23 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:21.272 13:36:23 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:21.272 13:36:23 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:21.272 13:36:23 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:21.272 13:36:23 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:21.272 13:36:23 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:21.272 13:36:23 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:21.272 13:36:23 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:21.272 13:36:23 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:21.272 13:36:23 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:21.272 13:36:23 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:21.272 INFO: JSON configuration test init 00:06:21.272 13:36:23 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:21.272 13:36:23 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:21.272 13:36:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.272 13:36:23 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:21.272 13:36:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.272 13:36:23 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:21.272 13:36:23 -- json_config/json_config.sh@98 -- # local app=target 00:06:21.272 13:36:23 -- json_config/json_config.sh@99 -- # shift 00:06:21.272 13:36:23 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:21.272 13:36:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:21.272 13:36:23 -- json_config/json_config.sh@111 -- # app_pid[$app]=1418464 00:06:21.272 13:36:23 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:21.272 Waiting for target to run... 00:06:21.272 13:36:23 -- json_config/json_config.sh@114 -- # waitforlisten 1418464 /var/tmp/spdk_tgt.sock 00:06:21.272 13:36:23 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:21.272 13:36:23 -- common/autotest_common.sh@819 -- # '[' -z 1418464 ']' 00:06:21.272 13:36:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.272 13:36:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.272 13:36:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.272 13:36:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.272 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:06:21.531 [2024-07-11 13:36:23.763968] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:21.531 [2024-07-11 13:36:23.764018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418464 ] 00:06:21.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.789 [2024-07-11 13:36:24.026042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.789 [2024-07-11 13:36:24.048118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.789 [2024-07-11 13:36:24.048219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.356 13:36:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.356 13:36:24 -- common/autotest_common.sh@852 -- # return 0 00:06:22.356 13:36:24 -- json_config/json_config.sh@115 -- # echo '' 00:06:22.356 00:06:22.356 13:36:24 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:22.356 13:36:24 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:22.356 13:36:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:22.356 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.356 13:36:24 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:22.356 13:36:24 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:22.356 13:36:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:22.356 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:06:22.356 13:36:24 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:22.356 13:36:24 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:22.356 13:36:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:25.646 13:36:27 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:25.646 13:36:27 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:25.646 13:36:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:25.646 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.646 13:36:27 -- json_config/json_config.sh@48 -- # local ret=0 00:06:25.646 13:36:27 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:25.646 13:36:27 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:25.646 13:36:27 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:25.646 13:36:27 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:25.646 13:36:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:25.646 13:36:27 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:25.646 13:36:27 -- json_config/json_config.sh@51 -- # local get_types 00:06:25.646 13:36:27 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:25.646 13:36:27 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:25.646 13:36:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:25.647 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.647 13:36:27 -- json_config/json_config.sh@58 -- # return 0 00:06:25.647 13:36:27 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:25.647 13:36:27 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:25.647 13:36:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:25.647 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:06:25.647 13:36:27 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:25.647 13:36:27 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:25.647 13:36:27 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.647 13:36:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:25.647 MallocForNvmf0 00:06:25.647 13:36:27 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.647 13:36:27 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.905 MallocForNvmf1 00:06:25.905 13:36:28 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.905 13:36:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.905 [2024-07-11 13:36:28.313662] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.905 13:36:28 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.905 13:36:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:26.164 13:36:28 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.164 13:36:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.422 13:36:28 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.422 13:36:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.422 13:36:28 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.422 13:36:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.680 [2024-07-11 13:36:28.979770] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
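The sequence above is the standard SPDK JSON-RPC bring-up for an NVMe-oF/TCP subsystem. A minimal manual reproduction, assuming a target already listening on /var/tmp/spdk_tgt.sock (the bdev name, NQN, serial, address and port are taken verbatim from this trace; anything else would need adjusting):

    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0          # backing bdev
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0               # TCP transport init
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Each call corresponds one-to-one to a tgt_rpc line in the trace; the final listener call is what produces the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice.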
00:06:26.681 13:36:28 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:26.681 13:36:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:26.681 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:26.681 13:36:29 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:26.681 13:36:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:26.681 13:36:29 -- common/autotest_common.sh@10 -- # set +x 00:06:26.681 13:36:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:26.681 13:36:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.681 13:36:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.939 MallocBdevForConfigChangeCheck 00:06:26.939 13:36:29 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:26.939 13:36:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:26.939 13:36:29 -- common/autotest_common.sh@10 -- # set +x 00:06:26.939 13:36:29 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:26.939 13:36:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.198 13:36:29 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:27.198 INFO: shutting down applications... 00:06:27.198 13:36:29 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:27.198 13:36:29 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:27.198 13:36:29 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:27.198 13:36:29 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:29.104 Calling clear_iscsi_subsystem 00:06:29.104 Calling clear_nvmf_subsystem 00:06:29.104 Calling clear_nbd_subsystem 00:06:29.104 Calling clear_ublk_subsystem 00:06:29.104 Calling clear_vhost_blk_subsystem 00:06:29.104 Calling clear_vhost_scsi_subsystem 00:06:29.104 Calling clear_scheduler_subsystem 00:06:29.104 Calling clear_bdev_subsystem 00:06:29.104 Calling clear_accel_subsystem 00:06:29.104 Calling clear_vmd_subsystem 00:06:29.104 Calling clear_sock_subsystem 00:06:29.104 Calling clear_iobuf_subsystem 00:06:29.104 13:36:31 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:29.104 13:36:31 -- json_config/json_config.sh@396 -- # count=100 00:06:29.104 13:36:31 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:29.104 13:36:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.104 13:36:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:29.104 13:36:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:29.104 13:36:31 -- json_config/json_config.sh@398 -- # break 00:06:29.104 13:36:31 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:29.104 13:36:31 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:06:29.104 13:36:31 -- json_config/json_config.sh@120 -- # local app=target 00:06:29.104 13:36:31 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:29.104 13:36:31 -- json_config/json_config.sh@124 -- # [[ -n 1418464 ]] 00:06:29.104 13:36:31 -- json_config/json_config.sh@127 -- # kill -SIGINT 1418464 00:06:29.104 13:36:31 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:29.104 13:36:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:29.104 13:36:31 -- json_config/json_config.sh@130 -- # kill -0 1418464 00:06:29.104 13:36:31 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:29.671 13:36:31 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:29.671 13:36:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:29.671 13:36:31 -- json_config/json_config.sh@130 -- # kill -0 1418464 00:06:29.671 13:36:31 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:29.671 13:36:31 -- json_config/json_config.sh@132 -- # break 00:06:29.671 13:36:31 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:29.671 13:36:31 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:29.671 SPDK target shutdown done 00:06:29.671 13:36:31 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:29.671 INFO: relaunching applications... 00:06:29.671 13:36:31 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.671 13:36:31 -- json_config/json_config.sh@98 -- # local app=target 00:06:29.671 13:36:31 -- json_config/json_config.sh@99 -- # shift 00:06:29.671 13:36:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:29.671 13:36:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:29.671 13:36:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:29.671 13:36:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:29.671 13:36:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:29.671 13:36:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=1419991 00:06:29.671 13:36:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:29.671 Waiting for target to run... 00:06:29.671 13:36:31 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.672 13:36:31 -- json_config/json_config.sh@114 -- # waitforlisten 1419991 /var/tmp/spdk_tgt.sock 00:06:29.672 13:36:31 -- common/autotest_common.sh@819 -- # '[' -z 1419991 ']' 00:06:29.672 13:36:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.672 13:36:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.672 13:36:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.672 13:36:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.672 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:06:29.672 [2024-07-11 13:36:31.958090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:29.672 [2024-07-11 13:36:31.958145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419991 ] 00:06:29.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.240 [2024-07-11 13:36:32.393467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.240 [2024-07-11 13:36:32.424757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.240 [2024-07-11 13:36:32.424863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.526 [2024-07-11 13:36:35.403752] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.526 [2024-07-11 13:36:35.436078] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:33.785 13:36:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.785 13:36:36 -- common/autotest_common.sh@852 -- # return 0 00:06:33.785 13:36:36 -- json_config/json_config.sh@115 -- # echo '' 00:06:33.785 00:06:33.785 13:36:36 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:33.785 13:36:36 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:33.785 INFO: Checking if target configuration is the same... 00:06:33.785 13:36:36 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.785 13:36:36 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:33.785 13:36:36 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.785 + '[' 2 -ne 2 ']' 00:06:33.785 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.785 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:33.785 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.785 +++ basename /dev/fd/62 00:06:33.785 ++ mktemp /tmp/62.XXX 00:06:33.785 + tmp_file_1=/tmp/62.swY 00:06:33.785 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.785 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.785 + tmp_file_2=/tmp/spdk_tgt_config.json.U1X 00:06:33.785 + ret=0 00:06:33.785 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.044 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.044 + diff -u /tmp/62.swY /tmp/spdk_tgt_config.json.U1X 00:06:34.044 + echo 'INFO: JSON config files are the same' 00:06:34.044 INFO: JSON config files are the same 00:06:34.044 + rm /tmp/62.swY /tmp/spdk_tgt_config.json.U1X 00:06:34.044 + exit 0 00:06:34.044 13:36:36 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:34.044 13:36:36 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.044 INFO: changing configuration and checking if this can be detected... 
00:06:34.044 13:36:36 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.044 13:36:36 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.302 13:36:36 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.302 13:36:36 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:34.302 13:36:36 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.302 + '[' 2 -ne 2 ']' 00:06:34.302 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.302 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.302 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.302 +++ basename /dev/fd/62 00:06:34.302 ++ mktemp /tmp/62.XXX 00:06:34.302 + tmp_file_1=/tmp/62.65n 00:06:34.302 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.302 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.302 + tmp_file_2=/tmp/spdk_tgt_config.json.t7T 00:06:34.302 + ret=0 00:06:34.302 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.562 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.562 + diff -u /tmp/62.65n /tmp/spdk_tgt_config.json.t7T 00:06:34.562 + ret=1 00:06:34.562 + echo '=== Start of file: /tmp/62.65n ===' 00:06:34.562 + cat /tmp/62.65n 00:06:34.562 + echo '=== End of file: /tmp/62.65n ===' 00:06:34.562 + echo '' 00:06:34.562 + echo '=== Start of file: /tmp/spdk_tgt_config.json.t7T ===' 00:06:34.562 + cat /tmp/spdk_tgt_config.json.t7T 00:06:34.562 + echo '=== End of file: /tmp/spdk_tgt_config.json.t7T ===' 00:06:34.562 + echo '' 00:06:34.562 + rm /tmp/62.65n /tmp/spdk_tgt_config.json.t7T 00:06:34.562 + exit 1 00:06:34.562 13:36:36 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:34.562 INFO: configuration change detected. 
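The identical-config and change-detection checks both work by diffing two snapshots of the target configuration, normalized through config_filter.py so that key ordering cannot cause false positives. A condensed sketch of what json_diff.sh does under the hood (temp file names here are illustrative; the test uses mktemp outputs such as /tmp/62.65n):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json        # live snapshot
    test/json_config/config_filter.py -method sort < live.json          > a.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > b.json
    diff -u a.json b.json && echo 'INFO: JSON config files are the same'

Deleting MallocBdevForConfigChangeCheck before the second comparison guarantees a non-empty diff, which is why ret flips from 0 to 1 above and the run logs "configuration change detected."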
00:06:34.562 13:36:36 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:34.562 13:36:36 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:34.562 13:36:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:34.562 13:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.562 13:36:36 -- json_config/json_config.sh@360 -- # local ret=0 00:06:34.562 13:36:36 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:34.562 13:36:36 -- json_config/json_config.sh@370 -- # [[ -n 1419991 ]] 00:06:34.562 13:36:36 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:34.562 13:36:36 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.562 13:36:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:34.562 13:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.562 13:36:36 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:34.562 13:36:36 -- json_config/json_config.sh@246 -- # uname -s 00:06:34.562 13:36:36 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:34.562 13:36:36 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:34.562 13:36:36 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:34.562 13:36:36 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.562 13:36:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:34.562 13:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:34.562 13:36:36 -- json_config/json_config.sh@376 -- # killprocess 1419991 00:06:34.562 13:36:36 -- common/autotest_common.sh@926 -- # '[' -z 1419991 ']' 00:06:34.562 13:36:36 -- common/autotest_common.sh@930 -- # kill -0 1419991 00:06:34.562 13:36:36 -- common/autotest_common.sh@931 -- # uname 00:06:34.562 13:36:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.562 13:36:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1419991 00:06:34.562 13:36:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.562 13:36:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.562 13:36:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1419991' 00:06:34.562 killing process with pid 1419991 00:06:34.562 13:36:36 -- common/autotest_common.sh@945 -- # kill 1419991 00:06:34.562 13:36:36 -- common/autotest_common.sh@950 -- # wait 1419991 00:06:36.468 13:36:38 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.468 13:36:38 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:36.468 13:36:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:36.468 13:36:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.468 13:36:38 -- json_config/json_config.sh@381 -- # return 0 00:06:36.468 13:36:38 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:36.468 INFO: Success 00:06:36.468 00:06:36.468 real 0m14.871s 00:06:36.468 user 0m15.877s 00:06:36.468 sys 0m1.908s 00:06:36.468 13:36:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.468 13:36:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.468 ************************************ 00:06:36.468 END TEST json_config 00:06:36.468 ************************************ 00:06:36.468 13:36:38 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.468 13:36:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.468 13:36:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.468 13:36:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.468 ************************************ 00:06:36.468 START TEST json_config_extra_key 00:06:36.468 ************************************ 00:06:36.468 13:36:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.468 13:36:38 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.468 13:36:38 -- nvmf/common.sh@7 -- # uname -s 00:06:36.468 13:36:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.468 13:36:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.468 13:36:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.468 13:36:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.468 13:36:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.468 13:36:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.468 13:36:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.468 13:36:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.468 13:36:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.468 13:36:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.468 13:36:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.468 13:36:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.468 13:36:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.468 13:36:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.468 13:36:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.469 13:36:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.469 13:36:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.469 13:36:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.469 13:36:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.469 13:36:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.469 13:36:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.469 13:36:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.469 13:36:38 -- paths/export.sh@5 -- # export PATH 00:06:36.469 13:36:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.469 13:36:38 -- nvmf/common.sh@46 -- # : 0 00:06:36.469 13:36:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:36.469 13:36:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:36.469 13:36:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:36.469 13:36:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.469 13:36:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.469 13:36:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:36.469 13:36:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:36.469 13:36:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:36.469 INFO: launching applications... 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1421286 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:36.469 Waiting for target to run... 
00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1421286 /var/tmp/spdk_tgt.sock 00:06:36.469 13:36:38 -- common/autotest_common.sh@819 -- # '[' -z 1421286 ']' 00:06:36.469 13:36:38 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.469 13:36:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.469 13:36:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.469 13:36:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.469 13:36:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.469 13:36:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.469 [2024-07-11 13:36:38.669434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:36.469 [2024-07-11 13:36:38.669487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421286 ] 00:06:36.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.728 [2024-07-11 13:36:39.097526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.728 [2024-07-11 13:36:39.129021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.728 [2024-07-11 13:36:39.129124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.987 13:36:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.987 13:36:39 -- common/autotest_common.sh@852 -- # return 0 00:06:36.987 13:36:39 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:36.987 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:37.247 INFO: shutting down applications... 
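Shutdown in these tests is cooperative: the harness sends SIGINT to the target and then polls with kill -0 until the PID disappears, giving up after 30 half-second attempts. Reduced to its core, the pattern is:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # target still alive?
        sleep 0.5
    done

This is the same (( i < 30 )) / kill -0 / sleep 0.5 loop visible in the json_config shutdown earlier and in the extra_key shutdown that follows.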
00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1421286 ]] 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1421286 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1421286 00:06:37.247 13:36:39 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1421286 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:37.506 SPDK target shutdown done 00:06:37.506 13:36:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:37.506 Success 00:06:37.506 00:06:37.506 real 0m1.420s 00:06:37.506 user 0m1.025s 00:06:37.506 sys 0m0.518s 00:06:37.506 13:36:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.506 13:36:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.506 ************************************ 00:06:37.506 END TEST json_config_extra_key 00:06:37.506 ************************************ 00:06:37.765 13:36:39 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.765 13:36:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.765 13:36:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.765 13:36:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.765 ************************************ 00:06:37.765 START TEST alias_rpc 00:06:37.765 ************************************ 00:06:37.765 13:36:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.765 * Looking for test storage... 00:06:37.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:37.765 13:36:40 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.765 13:36:40 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1421562 00:06:37.765 13:36:40 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.765 13:36:40 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1421562 00:06:37.765 13:36:40 -- common/autotest_common.sh@819 -- # '[' -z 1421562 ']' 00:06:37.765 13:36:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.765 13:36:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.765 13:36:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:37.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.766 13:36:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.766 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:06:37.766 [2024-07-11 13:36:40.115578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.766 [2024-07-11 13:36:40.115635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421562 ] 00:06:37.766 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.766 [2024-07-11 13:36:40.168951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.766 [2024-07-11 13:36:40.208586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.766 [2024-07-11 13:36:40.208705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.772 13:36:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.772 13:36:40 -- common/autotest_common.sh@852 -- # return 0 00:06:38.772 13:36:40 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.772 13:36:41 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1421562 00:06:38.772 13:36:41 -- common/autotest_common.sh@926 -- # '[' -z 1421562 ']' 00:06:38.772 13:36:41 -- common/autotest_common.sh@930 -- # kill -0 1421562 00:06:38.772 13:36:41 -- common/autotest_common.sh@931 -- # uname 00:06:38.772 13:36:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.772 13:36:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1421562 00:06:38.772 13:36:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.772 13:36:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.772 13:36:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1421562' 00:06:38.772 killing process with pid 1421562 00:06:38.772 13:36:41 -- common/autotest_common.sh@945 -- # kill 1421562 00:06:38.772 13:36:41 -- common/autotest_common.sh@950 -- # wait 1421562 00:06:39.031 00:06:39.031 real 0m1.443s 00:06:39.031 user 0m1.590s 00:06:39.031 sys 0m0.366s 00:06:39.031 13:36:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.031 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.031 ************************************ 00:06:39.031 END TEST alias_rpc 00:06:39.031 ************************************ 00:06:39.031 13:36:41 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:39.031 13:36:41 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.031 13:36:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.031 13:36:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.031 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.031 ************************************ 00:06:39.031 START TEST spdkcli_tcp 00:06:39.031 ************************************ 00:06:39.031 13:36:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.291 * Looking for test storage... 
00:06:39.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:39.291 13:36:41 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:39.291 13:36:41 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.291 13:36:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:39.291 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1421858 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@27 -- # waitforlisten 1421858 00:06:39.291 13:36:41 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.291 13:36:41 -- common/autotest_common.sh@819 -- # '[' -z 1421858 ']' 00:06:39.291 13:36:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.291 13:36:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.291 13:36:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.291 13:36:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.291 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 [2024-07-11 13:36:41.608803] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:39.291 [2024-07-11 13:36:41.608855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1421858 ] 00:06:39.291 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.291 [2024-07-11 13:36:41.664273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.291 [2024-07-11 13:36:41.703112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.291 [2024-07-11 13:36:41.703278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.291 [2024-07-11 13:36:41.703281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.229 13:36:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.229 13:36:42 -- common/autotest_common.sh@852 -- # return 0 00:06:40.229 13:36:42 -- spdkcli/tcp.sh@31 -- # socat_pid=1422019 00:06:40.229 13:36:42 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:40.229 13:36:42 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.229 [ 00:06:40.229 "bdev_malloc_delete", 00:06:40.229 "bdev_malloc_create", 00:06:40.229 "bdev_null_resize", 00:06:40.229 "bdev_null_delete", 00:06:40.229 "bdev_null_create", 00:06:40.229 "bdev_nvme_cuse_unregister", 00:06:40.229 "bdev_nvme_cuse_register", 00:06:40.229 "bdev_opal_new_user", 00:06:40.229 "bdev_opal_set_lock_state", 00:06:40.229 "bdev_opal_delete", 00:06:40.229 "bdev_opal_get_info", 00:06:40.229 "bdev_opal_create", 00:06:40.229 "bdev_nvme_opal_revert", 00:06:40.229 "bdev_nvme_opal_init", 00:06:40.229 "bdev_nvme_send_cmd", 00:06:40.229 "bdev_nvme_get_path_iostat", 00:06:40.229 "bdev_nvme_get_mdns_discovery_info", 00:06:40.229 "bdev_nvme_stop_mdns_discovery", 00:06:40.229 "bdev_nvme_start_mdns_discovery", 00:06:40.229 "bdev_nvme_set_multipath_policy", 00:06:40.229 "bdev_nvme_set_preferred_path", 00:06:40.229 "bdev_nvme_get_io_paths", 00:06:40.229 "bdev_nvme_remove_error_injection", 00:06:40.229 "bdev_nvme_add_error_injection", 00:06:40.229 "bdev_nvme_get_discovery_info", 00:06:40.229 "bdev_nvme_stop_discovery", 00:06:40.229 "bdev_nvme_start_discovery", 00:06:40.229 "bdev_nvme_get_controller_health_info", 00:06:40.229 "bdev_nvme_disable_controller", 00:06:40.229 "bdev_nvme_enable_controller", 00:06:40.229 "bdev_nvme_reset_controller", 00:06:40.229 "bdev_nvme_get_transport_statistics", 00:06:40.229 "bdev_nvme_apply_firmware", 00:06:40.229 "bdev_nvme_detach_controller", 00:06:40.229 "bdev_nvme_get_controllers", 00:06:40.229 "bdev_nvme_attach_controller", 00:06:40.229 "bdev_nvme_set_hotplug", 00:06:40.229 "bdev_nvme_set_options", 00:06:40.229 "bdev_passthru_delete", 00:06:40.229 "bdev_passthru_create", 00:06:40.229 "bdev_lvol_grow_lvstore", 00:06:40.229 "bdev_lvol_get_lvols", 00:06:40.229 "bdev_lvol_get_lvstores", 00:06:40.229 "bdev_lvol_delete", 00:06:40.229 "bdev_lvol_set_read_only", 00:06:40.229 "bdev_lvol_resize", 00:06:40.229 "bdev_lvol_decouple_parent", 00:06:40.229 "bdev_lvol_inflate", 00:06:40.229 "bdev_lvol_rename", 00:06:40.229 "bdev_lvol_clone_bdev", 00:06:40.230 "bdev_lvol_clone", 00:06:40.230 "bdev_lvol_snapshot", 00:06:40.230 "bdev_lvol_create", 00:06:40.230 "bdev_lvol_delete_lvstore", 00:06:40.230 "bdev_lvol_rename_lvstore", 00:06:40.230 "bdev_lvol_create_lvstore", 00:06:40.230 "bdev_raid_set_options", 00:06:40.230 
"bdev_raid_remove_base_bdev", 00:06:40.230 "bdev_raid_add_base_bdev", 00:06:40.230 "bdev_raid_delete", 00:06:40.230 "bdev_raid_create", 00:06:40.230 "bdev_raid_get_bdevs", 00:06:40.230 "bdev_error_inject_error", 00:06:40.230 "bdev_error_delete", 00:06:40.230 "bdev_error_create", 00:06:40.230 "bdev_split_delete", 00:06:40.230 "bdev_split_create", 00:06:40.230 "bdev_delay_delete", 00:06:40.230 "bdev_delay_create", 00:06:40.230 "bdev_delay_update_latency", 00:06:40.230 "bdev_zone_block_delete", 00:06:40.230 "bdev_zone_block_create", 00:06:40.230 "blobfs_create", 00:06:40.230 "blobfs_detect", 00:06:40.230 "blobfs_set_cache_size", 00:06:40.230 "bdev_aio_delete", 00:06:40.230 "bdev_aio_rescan", 00:06:40.230 "bdev_aio_create", 00:06:40.230 "bdev_ftl_set_property", 00:06:40.230 "bdev_ftl_get_properties", 00:06:40.230 "bdev_ftl_get_stats", 00:06:40.230 "bdev_ftl_unmap", 00:06:40.230 "bdev_ftl_unload", 00:06:40.230 "bdev_ftl_delete", 00:06:40.230 "bdev_ftl_load", 00:06:40.230 "bdev_ftl_create", 00:06:40.230 "bdev_virtio_attach_controller", 00:06:40.230 "bdev_virtio_scsi_get_devices", 00:06:40.230 "bdev_virtio_detach_controller", 00:06:40.230 "bdev_virtio_blk_set_hotplug", 00:06:40.230 "bdev_iscsi_delete", 00:06:40.230 "bdev_iscsi_create", 00:06:40.230 "bdev_iscsi_set_options", 00:06:40.230 "accel_error_inject_error", 00:06:40.230 "ioat_scan_accel_module", 00:06:40.230 "dsa_scan_accel_module", 00:06:40.230 "iaa_scan_accel_module", 00:06:40.230 "vfu_virtio_create_scsi_endpoint", 00:06:40.230 "vfu_virtio_scsi_remove_target", 00:06:40.230 "vfu_virtio_scsi_add_target", 00:06:40.230 "vfu_virtio_create_blk_endpoint", 00:06:40.230 "vfu_virtio_delete_endpoint", 00:06:40.230 "iscsi_set_options", 00:06:40.230 "iscsi_get_auth_groups", 00:06:40.230 "iscsi_auth_group_remove_secret", 00:06:40.230 "iscsi_auth_group_add_secret", 00:06:40.230 "iscsi_delete_auth_group", 00:06:40.230 "iscsi_create_auth_group", 00:06:40.230 "iscsi_set_discovery_auth", 00:06:40.230 "iscsi_get_options", 00:06:40.230 "iscsi_target_node_request_logout", 00:06:40.230 "iscsi_target_node_set_redirect", 00:06:40.230 "iscsi_target_node_set_auth", 00:06:40.230 "iscsi_target_node_add_lun", 00:06:40.230 "iscsi_get_connections", 00:06:40.230 "iscsi_portal_group_set_auth", 00:06:40.230 "iscsi_start_portal_group", 00:06:40.230 "iscsi_delete_portal_group", 00:06:40.230 "iscsi_create_portal_group", 00:06:40.230 "iscsi_get_portal_groups", 00:06:40.230 "iscsi_delete_target_node", 00:06:40.230 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.230 "iscsi_target_node_add_pg_ig_maps", 00:06:40.230 "iscsi_create_target_node", 00:06:40.230 "iscsi_get_target_nodes", 00:06:40.230 "iscsi_delete_initiator_group", 00:06:40.230 "iscsi_initiator_group_remove_initiators", 00:06:40.230 "iscsi_initiator_group_add_initiators", 00:06:40.230 "iscsi_create_initiator_group", 00:06:40.230 "iscsi_get_initiator_groups", 00:06:40.230 "nvmf_set_crdt", 00:06:40.230 "nvmf_set_config", 00:06:40.230 "nvmf_set_max_subsystems", 00:06:40.230 "nvmf_subsystem_get_listeners", 00:06:40.230 "nvmf_subsystem_get_qpairs", 00:06:40.230 "nvmf_subsystem_get_controllers", 00:06:40.230 "nvmf_get_stats", 00:06:40.230 "nvmf_get_transports", 00:06:40.230 "nvmf_create_transport", 00:06:40.230 "nvmf_get_targets", 00:06:40.230 "nvmf_delete_target", 00:06:40.230 "nvmf_create_target", 00:06:40.230 "nvmf_subsystem_allow_any_host", 00:06:40.230 "nvmf_subsystem_remove_host", 00:06:40.230 "nvmf_subsystem_add_host", 00:06:40.230 "nvmf_subsystem_remove_ns", 00:06:40.230 "nvmf_subsystem_add_ns", 00:06:40.230 
"nvmf_subsystem_listener_set_ana_state", 00:06:40.230 "nvmf_discovery_get_referrals", 00:06:40.230 "nvmf_discovery_remove_referral", 00:06:40.230 "nvmf_discovery_add_referral", 00:06:40.230 "nvmf_subsystem_remove_listener", 00:06:40.230 "nvmf_subsystem_add_listener", 00:06:40.230 "nvmf_delete_subsystem", 00:06:40.230 "nvmf_create_subsystem", 00:06:40.230 "nvmf_get_subsystems", 00:06:40.230 "env_dpdk_get_mem_stats", 00:06:40.230 "nbd_get_disks", 00:06:40.230 "nbd_stop_disk", 00:06:40.230 "nbd_start_disk", 00:06:40.230 "ublk_recover_disk", 00:06:40.230 "ublk_get_disks", 00:06:40.230 "ublk_stop_disk", 00:06:40.230 "ublk_start_disk", 00:06:40.230 "ublk_destroy_target", 00:06:40.230 "ublk_create_target", 00:06:40.230 "virtio_blk_create_transport", 00:06:40.230 "virtio_blk_get_transports", 00:06:40.230 "vhost_controller_set_coalescing", 00:06:40.230 "vhost_get_controllers", 00:06:40.230 "vhost_delete_controller", 00:06:40.230 "vhost_create_blk_controller", 00:06:40.230 "vhost_scsi_controller_remove_target", 00:06:40.230 "vhost_scsi_controller_add_target", 00:06:40.230 "vhost_start_scsi_controller", 00:06:40.230 "vhost_create_scsi_controller", 00:06:40.230 "thread_set_cpumask", 00:06:40.230 "framework_get_scheduler", 00:06:40.230 "framework_set_scheduler", 00:06:40.230 "framework_get_reactors", 00:06:40.230 "thread_get_io_channels", 00:06:40.230 "thread_get_pollers", 00:06:40.230 "thread_get_stats", 00:06:40.230 "framework_monitor_context_switch", 00:06:40.230 "spdk_kill_instance", 00:06:40.230 "log_enable_timestamps", 00:06:40.230 "log_get_flags", 00:06:40.230 "log_clear_flag", 00:06:40.230 "log_set_flag", 00:06:40.230 "log_get_level", 00:06:40.230 "log_set_level", 00:06:40.230 "log_get_print_level", 00:06:40.230 "log_set_print_level", 00:06:40.230 "framework_enable_cpumask_locks", 00:06:40.230 "framework_disable_cpumask_locks", 00:06:40.230 "framework_wait_init", 00:06:40.230 "framework_start_init", 00:06:40.230 "scsi_get_devices", 00:06:40.230 "bdev_get_histogram", 00:06:40.230 "bdev_enable_histogram", 00:06:40.230 "bdev_set_qos_limit", 00:06:40.230 "bdev_set_qd_sampling_period", 00:06:40.230 "bdev_get_bdevs", 00:06:40.230 "bdev_reset_iostat", 00:06:40.230 "bdev_get_iostat", 00:06:40.230 "bdev_examine", 00:06:40.230 "bdev_wait_for_examine", 00:06:40.230 "bdev_set_options", 00:06:40.230 "notify_get_notifications", 00:06:40.230 "notify_get_types", 00:06:40.230 "accel_get_stats", 00:06:40.230 "accel_set_options", 00:06:40.230 "accel_set_driver", 00:06:40.230 "accel_crypto_key_destroy", 00:06:40.230 "accel_crypto_keys_get", 00:06:40.230 "accel_crypto_key_create", 00:06:40.230 "accel_assign_opc", 00:06:40.230 "accel_get_module_info", 00:06:40.230 "accel_get_opc_assignments", 00:06:40.230 "vmd_rescan", 00:06:40.230 "vmd_remove_device", 00:06:40.230 "vmd_enable", 00:06:40.230 "sock_set_default_impl", 00:06:40.230 "sock_impl_set_options", 00:06:40.230 "sock_impl_get_options", 00:06:40.230 "iobuf_get_stats", 00:06:40.230 "iobuf_set_options", 00:06:40.230 "framework_get_pci_devices", 00:06:40.230 "framework_get_config", 00:06:40.230 "framework_get_subsystems", 00:06:40.230 "vfu_tgt_set_base_path", 00:06:40.230 "trace_get_info", 00:06:40.230 "trace_get_tpoint_group_mask", 00:06:40.230 "trace_disable_tpoint_group", 00:06:40.230 "trace_enable_tpoint_group", 00:06:40.230 "trace_clear_tpoint_mask", 00:06:40.230 "trace_set_tpoint_mask", 00:06:40.230 "spdk_get_version", 00:06:40.230 "rpc_get_methods" 00:06:40.230 ] 00:06:40.230 13:36:42 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.230 
13:36:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:40.230 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.230 13:36:42 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.230 13:36:42 -- spdkcli/tcp.sh@38 -- # killprocess 1421858 00:06:40.230 13:36:42 -- common/autotest_common.sh@926 -- # '[' -z 1421858 ']' 00:06:40.230 13:36:42 -- common/autotest_common.sh@930 -- # kill -0 1421858 00:06:40.230 13:36:42 -- common/autotest_common.sh@931 -- # uname 00:06:40.230 13:36:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.230 13:36:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1421858 00:06:40.230 13:36:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.230 13:36:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.230 13:36:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1421858' 00:06:40.230 killing process with pid 1421858 00:06:40.230 13:36:42 -- common/autotest_common.sh@945 -- # kill 1421858 00:06:40.230 13:36:42 -- common/autotest_common.sh@950 -- # wait 1421858 00:06:40.798 00:06:40.798 real 0m1.483s 00:06:40.798 user 0m2.807s 00:06:40.798 sys 0m0.423s 00:06:40.798 13:36:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.798 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.798 ************************************ 00:06:40.798 END TEST spdkcli_tcp 00:06:40.798 ************************************ 00:06:40.798 13:36:42 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.798 13:36:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.798 13:36:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.798 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:40.798 ************************************ 00:06:40.798 START TEST dpdk_mem_utility 00:06:40.798 ************************************ 00:06:40.798 13:36:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.798 * Looking for test storage... 00:06:40.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:40.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1422159 00:06:40.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1422159 00:06:40.798 13:36:43 -- common/autotest_common.sh@819 -- # '[' -z 1422159 ']' 00:06:40.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.798 13:36:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.798 13:36:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.798 13:36:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
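Note: killprocess, traced in the spdkcli_tcp teardown above, reduces to four steps: check the PID is alive with kill -0, confirm on Linux that the PID still names the expected process (guarding against PID reuse), send the default SIGTERM, and wait to reap it. An illustrative reimplementation of that shape, not the shared helper itself:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                  # still alive?
      if [ "$(uname)" = Linux ]; then
          # Bail out if the PID was recycled; reactor_0 is the comm name the
          # target reports, sudo would indicate an unrelated process.
          [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      fi
      kill "$pid"
      wait "$pid"                                 # reap to avoid a zombie
  }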
00:06:40.798 13:36:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.798 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:40.798 [2024-07-11 13:36:43.111946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:40.798 [2024-07-11 13:36:43.111998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422159 ] 00:06:40.798 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.798 [2024-07-11 13:36:43.166591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.798 [2024-07-11 13:36:43.204931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.798 [2024-07-11 13:36:43.205051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.796 13:36:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.796 13:36:43 -- common/autotest_common.sh@852 -- # return 0 00:06:41.796 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.796 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.796 13:36:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.796 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.796 { 00:06:41.796 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.796 } 00:06:41.796 13:36:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.796 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.796 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:41.796 1 heaps totaling size 814.000000 MiB 00:06:41.796 size: 814.000000 MiB heap id: 0 00:06:41.796 end heaps---------- 00:06:41.796 8 mempools totaling size 598.116089 MiB 00:06:41.796 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.796 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.796 size: 84.521057 MiB name: bdev_io_1422159 00:06:41.797 size: 51.011292 MiB name: evtpool_1422159 00:06:41.797 size: 50.003479 MiB name: msgpool_1422159 00:06:41.797 size: 21.763794 MiB name: PDU_Pool 00:06:41.797 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.797 size: 0.026123 MiB name: Session_Pool 00:06:41.797 end mempools------- 00:06:41.797 6 memzones totaling size 4.142822 MiB 00:06:41.797 size: 1.000366 MiB name: RG_ring_0_1422159 00:06:41.797 size: 1.000366 MiB name: RG_ring_1_1422159 00:06:41.797 size: 1.000366 MiB name: RG_ring_4_1422159 00:06:41.797 size: 1.000366 MiB name: RG_ring_5_1422159 00:06:41.797 size: 0.125366 MiB name: RG_ring_2_1422159 00:06:41.797 size: 0.015991 MiB name: RG_ring_3_1422159 00:06:41.797 end memzones------- 00:06:41.797 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.797 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:41.797 list of free elements. 
size: 12.519348 MiB 00:06:41.797 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:41.797 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:41.797 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:41.797 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:41.797 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:41.797 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:41.797 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:41.797 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:41.797 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:41.797 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:41.797 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:41.797 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:41.797 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:41.797 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:41.797 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:41.797 list of standard malloc elements. size: 199.218079 MiB 00:06:41.797 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:41.797 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:41.797 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:41.797 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:41.797 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:41.797 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.797 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:41.797 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.797 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:41.797 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:41.797 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:41.797 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:41.797 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:41.797 list of memzone associated elements. size: 602.262573 MiB 00:06:41.797 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:41.797 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.797 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:41.797 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.797 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:41.797 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1422159_0 00:06:41.797 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:41.797 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1422159_0 00:06:41.797 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:41.797 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1422159_0 00:06:41.797 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:41.797 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.797 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:41.797 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.797 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:41.797 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1422159 00:06:41.797 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:41.797 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1422159 00:06:41.797 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.797 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1422159 00:06:41.797 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:41.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.797 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:41.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.797 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:41.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.797 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:41.797 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.797 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:41.797 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1422159 00:06:41.797 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:41.797 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1422159 00:06:41.797 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:41.797 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1422159 00:06:41.797 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:41.797 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1422159 00:06:41.797 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:41.797 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1422159 00:06:41.797 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:41.797 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.797 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:41.797 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.797 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:41.797 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.797 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:41.797 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1422159 00:06:41.797 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:41.797 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.797 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:41.797 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.797 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:41.797 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1422159 00:06:41.797 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:41.797 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.797 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:41.797 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1422159 00:06:41.797 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:41.797 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1422159 00:06:41.797 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:41.798 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.798 13:36:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1422159 00:06:41.798 13:36:43 -- common/autotest_common.sh@926 -- # '[' -z 1422159 ']' 00:06:41.798 13:36:43 -- common/autotest_common.sh@930 -- # kill -0 1422159 00:06:41.798 13:36:43 -- common/autotest_common.sh@931 -- # uname 00:06:41.798 13:36:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.798 13:36:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1422159 00:06:41.798 13:36:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.798 13:36:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.798 13:36:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1422159' 00:06:41.798 killing process with pid 1422159 00:06:41.798 13:36:44 -- common/autotest_common.sh@945 -- # kill 1422159 00:06:41.798 13:36:44 -- common/autotest_common.sh@950 -- # wait 1422159 00:06:42.056 00:06:42.056 real 0m1.336s 00:06:42.056 user 0m1.405s 00:06:42.056 sys 0m0.372s 00:06:42.056 13:36:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.056 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.056 ************************************ 00:06:42.056 END TEST dpdk_mem_utility 00:06:42.056 ************************************ 00:06:42.056 13:36:44 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.056 13:36:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.056 13:36:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.056 13:36:44 -- common/autotest_common.sh@10 -- # set +x 
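Note: the memory report above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that dump. Roughly, against a target already listening on the default socket:

  # Ask the target to dump its memory state (the reply names the dump file).
  ./scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt.
  ./scripts/dpdk_mem_info.py

  # Element-level detail, as invoked with -m 0 in the test above.
  ./scripts/dpdk_mem_info.py -m 0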
00:06:42.056 ************************************ 00:06:42.056 START TEST event 00:06:42.056 ************************************ 00:06:42.056 13:36:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:42.056 * Looking for test storage... 00:06:42.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.056 13:36:44 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:42.056 13:36:44 -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.056 13:36:44 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.056 13:36:44 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:42.056 13:36:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.056 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:06:42.056 ************************************ 00:06:42.056 START TEST event_perf 00:06:42.056 ************************************ 00:06:42.056 13:36:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.056 Running I/O for 1 seconds...[2024-07-11 13:36:44.471690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:42.056 [2024-07-11 13:36:44.471767] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422450 ] 00:06:42.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.315 [2024-07-11 13:36:44.529800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.315 [2024-07-11 13:36:44.568776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.315 [2024-07-11 13:36:44.568875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.315 [2024-07-11 13:36:44.568957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.315 [2024-07-11 13:36:44.568959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.251 Running I/O for 1 seconds... 00:06:43.251 lcore 0: 207567 00:06:43.251 lcore 1: 207567 00:06:43.251 lcore 2: 207566 00:06:43.251 lcore 3: 207567 00:06:43.251 done. 
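Note: the four counters above are event_perf's output: it drives a minimal event loop on every core in the -m mask for -t seconds and prints how many events each lcore processed, so the ~207k figures correspond to one second per core here. Reproducing the run is a single command once the test binaries are built (path relative to the SPDK tree):

  # Four cores (mask 0xF), one second; prints one "lcore N: <count>" per core.
  ./test/event/event_perf/event_perf -m 0xF -t 1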
00:06:43.251 00:06:43.251 real 0m1.178s 00:06:43.251 user 0m4.094s 00:06:43.251 sys 0m0.081s 00:06:43.251 13:36:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.251 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:43.251 ************************************ 00:06:43.251 END TEST event_perf 00:06:43.251 ************************************ 00:06:43.251 13:36:45 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.251 13:36:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:43.251 13:36:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.251 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:43.251 ************************************ 00:06:43.251 START TEST event_reactor 00:06:43.251 ************************************ 00:06:43.251 13:36:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:43.251 [2024-07-11 13:36:45.683275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:43.251 [2024-07-11 13:36:45.683347] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422701 ] 00:06:43.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.509 [2024-07-11 13:36:45.740685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.509 [2024-07-11 13:36:45.777057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.447 test_start 00:06:44.447 oneshot 00:06:44.447 tick 100 00:06:44.447 tick 100 00:06:44.447 tick 250 00:06:44.447 tick 100 00:06:44.447 tick 100 00:06:44.447 tick 100 00:06:44.447 tick 250 00:06:44.447 tick 500 00:06:44.447 tick 100 00:06:44.447 tick 100 00:06:44.447 tick 250 00:06:44.447 tick 100 00:06:44.447 tick 100 00:06:44.447 test_end 00:06:44.447 00:06:44.447 real 0m1.177s 00:06:44.447 user 0m1.107s 00:06:44.447 sys 0m0.066s 00:06:44.447 13:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.447 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.447 ************************************ 00:06:44.447 END TEST event_reactor 00:06:44.447 ************************************ 00:06:44.447 13:36:46 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.447 13:36:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:44.448 13:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.448 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:44.448 ************************************ 00:06:44.448 START TEST event_reactor_perf 00:06:44.448 ************************************ 00:06:44.448 13:36:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.448 [2024-07-11 13:36:46.891696] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:44.448 [2024-07-11 13:36:46.891773] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422949 ] 00:06:44.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.706 [2024-07-11 13:36:46.948054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.706 [2024-07-11 13:36:46.984244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.643 test_start 00:06:45.643 test_end 00:06:45.643 Performance: 499372 events per second 00:06:45.643 00:06:45.643 real 0m1.169s 00:06:45.643 user 0m1.097s 00:06:45.643 sys 0m0.067s 00:06:45.643 13:36:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.643 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.643 ************************************ 00:06:45.643 END TEST event_reactor_perf 00:06:45.643 ************************************ 00:06:45.643 13:36:48 -- event/event.sh@49 -- # uname -s 00:06:45.643 13:36:48 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.643 13:36:48 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.643 13:36:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.643 13:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.643 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.643 ************************************ 00:06:45.643 START TEST event_scheduler 00:06:45.644 ************************************ 00:06:45.644 13:36:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.903 * Looking for test storage... 00:06:45.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.903 13:36:48 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.903 13:36:48 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1423226 00:06:45.903 13:36:48 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.903 13:36:48 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.903 13:36:48 -- scheduler/scheduler.sh@37 -- # waitforlisten 1423226 00:06:45.903 13:36:48 -- common/autotest_common.sh@819 -- # '[' -z 1423226 ']' 00:06:45.903 13:36:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.903 13:36:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.903 13:36:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.903 13:36:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.903 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.903 [2024-07-11 13:36:48.202376] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:45.903 [2024-07-11 13:36:48.202425] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423226 ] 00:06:45.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.903 [2024-07-11 13:36:48.253508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.903 [2024-07-11 13:36:48.293091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.903 [2024-07-11 13:36:48.293185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.903 [2024-07-11 13:36:48.293226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.903 [2024-07-11 13:36:48.293227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.904 13:36:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.904 13:36:48 -- common/autotest_common.sh@852 -- # return 0 00:06:45.904 13:36:48 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.904 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.904 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:45.904 POWER: Env isn't set yet! 00:06:45.904 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:45.904 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:45.904 POWER: Cannot set governor of lcore 0 to userspace 00:06:45.904 POWER: Attempting to initialise PSTAT power management... 00:06:45.904 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:45.904 POWER: Initialized successfully for lcore 0 power management 00:06:46.163 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:46.163 POWER: Initialized successfully for lcore 1 power management 00:06:46.163 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:46.163 POWER: Initialized successfully for lcore 2 power management 00:06:46.163 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:46.163 POWER: Initialized successfully for lcore 3 power management 00:06:46.163 [2024-07-11 13:36:48.381281] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.163 [2024-07-11 13:36:48.381295] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.163 [2024-07-11 13:36:48.381302] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.163 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.163 13:36:48 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.163 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.163 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 [2024-07-11 13:36:48.444812] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
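Note: the governor messages and scheduler options above depend on the --wait-for-rpc flag the scheduler app was started with: the framework pauses before subsystem init, framework_set_scheduler can therefore still switch to "dynamic", and framework_start_init then completes startup. The same sequence sketched against a generic target (any app built on the SPDK app framework accepts these flags):

  ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &

  # -r retries until the RPC socket is up; the scheduler must be set pre-init.
  ./scripts/rpc.py -r 100 framework_set_scheduler dynamic

  # Finish initialization; the scheduler starts balancing threads from here.
  ./scripts/rpc.py framework_start_init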
00:06:46.163 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.163 13:36:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.163 13:36:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.163 13:36:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.163 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.163 ************************************ 00:06:46.163 START TEST scheduler_create_thread 00:06:46.163 ************************************ 00:06:46.164 13:36:48 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 2 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 3 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 4 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 5 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 6 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 7 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 8 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 9 00:06:46.164 
13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.164 10 00:06:46.164 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.164 13:36:48 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.164 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.164 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:46.732 13:36:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.732 13:36:48 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.732 13:36:48 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.732 13:36:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.732 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:47.669 13:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.670 13:36:49 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.670 13:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.670 13:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:48.607 13:36:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.607 13:36:50 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.607 13:36:50 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.607 13:36:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.607 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:49.544 13:36:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.544 00:06:49.544 real 0m3.229s 00:06:49.544 user 0m0.020s 00:06:49.544 sys 0m0.008s 00:06:49.544 13:36:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.544 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:49.544 ************************************ 00:06:49.544 END TEST scheduler_create_thread 00:06:49.544 ************************************ 00:06:49.544 13:36:51 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.544 13:36:51 -- scheduler/scheduler.sh@46 -- # killprocess 1423226 00:06:49.544 13:36:51 -- common/autotest_common.sh@926 -- # '[' -z 1423226 ']' 00:06:49.544 13:36:51 -- common/autotest_common.sh@930 -- # kill -0 1423226 00:06:49.544 13:36:51 -- common/autotest_common.sh@931 -- # uname 00:06:49.544 13:36:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.544 13:36:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1423226 00:06:49.544 13:36:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:49.544 13:36:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:49.544 13:36:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1423226' 00:06:49.544 killing process with pid 1423226 00:06:49.544 13:36:51 -- common/autotest_common.sh@945 -- # kill 1423226 00:06:49.545 13:36:51 -- common/autotest_common.sh@950 -- # wait 1423226 00:06:49.803 [2024-07-11 13:36:52.061894] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
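Note: the thread churn in this test goes through rpc.py's --plugin hook: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete are test-only methods supplied by the scheduler app's plugin module, not part of the stock RPC set (the module must be importable, e.g. via PYTHONPATH pointing at test/event/scheduler). Their shape as exercised above, with the thread ids 11 and 12 taken from this run:

  # A pinned thread on core 0 reporting 100% busy; the call returns its id.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100

  # Lower thread 11's reported activity to 50%, then delete thread 12.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12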
00:06:49.803 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:49.803 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:49.803 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:49.803 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:49.803 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:49.803 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:49.803 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:49.804 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:50.063 00:06:50.063 real 0m4.223s 00:06:50.063 user 0m7.389s 00:06:50.063 sys 0m0.305s 00:06:50.063 13:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.063 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:50.063 ************************************ 00:06:50.063 END TEST event_scheduler 00:06:50.063 ************************************ 00:06:50.063 13:36:52 -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.063 13:36:52 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.063 13:36:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.063 13:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.063 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:50.063 ************************************ 00:06:50.063 START TEST app_repeat 00:06:50.063 ************************************ 00:06:50.063 13:36:52 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:50.063 13:36:52 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.063 13:36:52 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.063 13:36:52 -- event/event.sh@13 -- # local nbd_list 00:06:50.063 13:36:52 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.063 13:36:52 -- event/event.sh@14 -- # local bdev_list 00:06:50.063 13:36:52 -- event/event.sh@15 -- # local repeat_times=4 00:06:50.063 13:36:52 -- event/event.sh@17 -- # modprobe nbd 00:06:50.063 13:36:52 -- event/event.sh@19 -- # repeat_pid=1423975 00:06:50.063 13:36:52 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.063 13:36:52 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1423975' 00:06:50.063 Process app_repeat pid: 1423975 00:06:50.063 13:36:52 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.063 13:36:52 -- event/event.sh@23 -- # for i in {0..2} 00:06:50.063 13:36:52 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.063 spdk_app_start Round 0 00:06:50.063 13:36:52 -- event/event.sh@25 -- # waitforlisten 1423975 /var/tmp/spdk-nbd.sock 00:06:50.063 13:36:52 -- common/autotest_common.sh@819 -- # '[' -z 1423975 ']' 00:06:50.063 13:36:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.063 13:36:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.063 13:36:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:50.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.063 13:36:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.063 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:50.063 [2024-07-11 13:36:52.365860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.063 [2024-07-11 13:36:52.365915] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423975 ] 00:06:50.063 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.063 [2024-07-11 13:36:52.419685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.063 [2024-07-11 13:36:52.459941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.063 [2024-07-11 13:36:52.459944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.999 13:36:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.000 13:36:53 -- common/autotest_common.sh@852 -- # return 0 00:06:51.000 13:36:53 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.000 Malloc0 00:06:51.000 13:36:53 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.258 Malloc1 00:06:51.259 13:36:53 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@12 -- # local i 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.259 /dev/nbd0 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.259 13:36:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.259 13:36:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:51.259 13:36:53 -- common/autotest_common.sh@857 -- # local i 00:06:51.259 13:36:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:51.259 13:36:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:51.259 13:36:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:51.518 13:36:53 -- 
common/autotest_common.sh@861 -- # break 00:06:51.518 13:36:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.518 1+0 records in 00:06:51.518 1+0 records out 00:06:51.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000119749 s, 34.2 MB/s 00:06:51.518 13:36:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.518 13:36:53 -- common/autotest_common.sh@874 -- # size=4096 00:06:51.518 13:36:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.518 13:36:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:51.518 13:36:53 -- common/autotest_common.sh@877 -- # return 0 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.518 /dev/nbd1 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.518 13:36:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:51.518 13:36:53 -- common/autotest_common.sh@857 -- # local i 00:06:51.518 13:36:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:51.518 13:36:53 -- common/autotest_common.sh@861 -- # break 00:06:51.518 13:36:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:51.518 13:36:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.518 1+0 records in 00:06:51.518 1+0 records out 00:06:51.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224016 s, 18.3 MB/s 00:06:51.518 13:36:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.518 13:36:53 -- common/autotest_common.sh@874 -- # size=4096 00:06:51.518 13:36:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.518 13:36:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:51.518 13:36:53 -- common/autotest_common.sh@877 -- # return 0 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.518 13:36:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.777 { 00:06:51.777 "nbd_device": "/dev/nbd0", 00:06:51.777 "bdev_name": "Malloc0" 00:06:51.777 }, 00:06:51.777 { 00:06:51.777 "nbd_device": "/dev/nbd1", 
00:06:51.777 "bdev_name": "Malloc1" 00:06:51.777 } 00:06:51.777 ]' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.777 { 00:06:51.777 "nbd_device": "/dev/nbd0", 00:06:51.777 "bdev_name": "Malloc0" 00:06:51.777 }, 00:06:51.777 { 00:06:51.777 "nbd_device": "/dev/nbd1", 00:06:51.777 "bdev_name": "Malloc1" 00:06:51.777 } 00:06:51.777 ]' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.777 /dev/nbd1' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.777 /dev/nbd1' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.777 256+0 records in 00:06:51.777 256+0 records out 00:06:51.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103273 s, 102 MB/s 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.777 13:36:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.777 256+0 records in 00:06:51.777 256+0 records out 00:06:51.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139895 s, 75.0 MB/s 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.778 256+0 records in 00:06:51.778 256+0 records out 00:06:51.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148635 s, 70.5 MB/s 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@51 -- # local i 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.778 13:36:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@41 -- # break 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.036 13:36:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.037 13:36:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@41 -- # break 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.295 13:36:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@65 -- # true 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.554 13:36:54 -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.554 13:36:54 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.554 13:36:54 -- event/event.sh@35 -- # 
sleep 3 00:06:52.813 [2024-07-11 13:36:55.153486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.813 [2024-07-11 13:36:55.187261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.813 [2024-07-11 13:36:55.187266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.813 [2024-07-11 13:36:55.228704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.813 [2024-07-11 13:36:55.228742] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.183 13:36:57 -- event/event.sh@23 -- # for i in {0..2} 00:06:56.183 13:36:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:56.183 spdk_app_start Round 1 00:06:56.183 13:36:57 -- event/event.sh@25 -- # waitforlisten 1423975 /var/tmp/spdk-nbd.sock 00:06:56.183 13:36:57 -- common/autotest_common.sh@819 -- # '[' -z 1423975 ']' 00:06:56.183 13:36:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.183 13:36:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.183 13:36:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.183 13:36:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.183 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:56.183 13:36:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:56.183 13:36:58 -- common/autotest_common.sh@852 -- # return 0 00:06:56.183 13:36:58 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.183 Malloc0 00:06:56.183 13:36:58 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.183 Malloc1 00:06:56.183 13:36:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@12 -- # local i 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.183 13:36:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.443 /dev/nbd0 00:06:56.443 13:36:58 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.443 13:36:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:56.443 13:36:58 -- common/autotest_common.sh@857 -- # local i 00:06:56.443 13:36:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:56.443 13:36:58 -- common/autotest_common.sh@861 -- # break 00:06:56.443 13:36:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.443 1+0 records in 00:06:56.443 1+0 records out 00:06:56.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179146 s, 22.9 MB/s 00:06:56.443 13:36:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.443 13:36:58 -- common/autotest_common.sh@874 -- # size=4096 00:06:56.443 13:36:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.443 13:36:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:56.443 13:36:58 -- common/autotest_common.sh@877 -- # return 0 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.443 /dev/nbd1 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.443 13:36:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:56.443 13:36:58 -- common/autotest_common.sh@857 -- # local i 00:06:56.443 13:36:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:56.443 13:36:58 -- common/autotest_common.sh@861 -- # break 00:06:56.443 13:36:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:56.443 13:36:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.443 1+0 records in 00:06:56.443 1+0 records out 00:06:56.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193151 s, 21.2 MB/s 00:06:56.443 13:36:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.443 13:36:58 -- common/autotest_common.sh@874 -- # size=4096 00:06:56.443 13:36:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.443 13:36:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:56.443 13:36:58 -- common/autotest_common.sh@877 -- # return 0 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.443 13:36:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.703 { 00:06:56.703 "nbd_device": "/dev/nbd0", 00:06:56.703 "bdev_name": "Malloc0" 00:06:56.703 }, 00:06:56.703 { 00:06:56.703 "nbd_device": "/dev/nbd1", 00:06:56.703 "bdev_name": "Malloc1" 00:06:56.703 } 00:06:56.703 ]' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.703 { 00:06:56.703 "nbd_device": "/dev/nbd0", 00:06:56.703 "bdev_name": "Malloc0" 00:06:56.703 }, 00:06:56.703 { 00:06:56.703 "nbd_device": "/dev/nbd1", 00:06:56.703 "bdev_name": "Malloc1" 00:06:56.703 } 00:06:56.703 ]' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.703 /dev/nbd1' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.703 /dev/nbd1' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.703 256+0 records in 00:06:56.703 256+0 records out 00:06:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102999 s, 102 MB/s 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.703 256+0 records in 00:06:56.703 256+0 records out 00:06:56.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138646 s, 75.6 MB/s 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.703 13:36:59 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.963 256+0 records in 00:06:56.963 256+0 records out 00:06:56.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157506 s, 66.6 MB/s 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@51 -- # local i 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@41 -- # break 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.963 13:36:59 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@41 -- # break 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.223 13:36:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.482 13:36:59 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@65 -- # true 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.482 13:36:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.482 13:36:59 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.742 13:36:59 -- event/event.sh@35 -- # sleep 3 00:06:57.742 [2024-07-11 13:37:00.142424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.742 [2024-07-11 13:37:00.179392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.742 [2024-07-11 13:37:00.179394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.001 [2024-07-11 13:37:00.221278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.001 [2024-07-11 13:37:00.221318] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.537 13:37:02 -- event/event.sh@23 -- # for i in {0..2} 00:07:00.537 13:37:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:00.537 spdk_app_start Round 2 00:07:00.537 13:37:02 -- event/event.sh@25 -- # waitforlisten 1423975 /var/tmp/spdk-nbd.sock 00:07:00.537 13:37:02 -- common/autotest_common.sh@819 -- # '[' -z 1423975 ']' 00:07:00.537 13:37:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.538 13:37:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:00.538 13:37:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
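The Round 0 and Round 1 traces above, and Round 2 now starting, all run the same nbd_dd_data_verify pass. Condensed into a standalone sketch (the temp-file path and device list are illustrative stand-ins, not the helper's actual variables), the flow the log records is:

    #!/usr/bin/env bash
    # Write one random 1 MiB pattern through each nbd device, then byte-compare
    # it back; mirrors the dd/cmp sequence in the trace above. Names are examples.
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # build the pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # verify phase; a non-zero exit fails the test
    done
    rm "$tmp_file"

The oflag=direct on the write phase presumably keeps the page cache out of the write path, so the data has actually reached the nbd server backing the Malloc bdev before cmp reads it back.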
00:07:00.538 13:37:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:00.538 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:07:00.796 13:37:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:00.796 13:37:03 -- common/autotest_common.sh@852 -- # return 0 00:07:00.796 13:37:03 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.054 Malloc0 00:07:01.054 13:37:03 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.054 Malloc1 00:07:01.054 13:37:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@12 -- # local i 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.054 13:37:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.311 /dev/nbd0 00:07:01.311 13:37:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.311 13:37:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.311 13:37:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:01.311 13:37:03 -- common/autotest_common.sh@857 -- # local i 00:07:01.311 13:37:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:01.311 13:37:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:01.311 13:37:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:01.311 13:37:03 -- common/autotest_common.sh@861 -- # break 00:07:01.311 13:37:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:01.311 13:37:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:01.311 13:37:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.311 1+0 records in 00:07:01.311 1+0 records out 00:07:01.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022486 s, 18.2 MB/s 00:07:01.311 13:37:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.311 13:37:03 -- common/autotest_common.sh@874 -- # size=4096 00:07:01.311 13:37:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.311 13:37:03 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:07:01.311 13:37:03 -- common/autotest_common.sh@877 -- # return 0 00:07:01.311 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.311 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.311 13:37:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.568 /dev/nbd1 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.568 13:37:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:01.568 13:37:03 -- common/autotest_common.sh@857 -- # local i 00:07:01.568 13:37:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:01.568 13:37:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:01.568 13:37:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:01.568 13:37:03 -- common/autotest_common.sh@861 -- # break 00:07:01.568 13:37:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:01.568 13:37:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:01.568 13:37:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.568 1+0 records in 00:07:01.568 1+0 records out 00:07:01.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180165 s, 22.7 MB/s 00:07:01.568 13:37:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.568 13:37:03 -- common/autotest_common.sh@874 -- # size=4096 00:07:01.568 13:37:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.568 13:37:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:01.568 13:37:03 -- common/autotest_common.sh@877 -- # return 0 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.568 13:37:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.827 { 00:07:01.827 "nbd_device": "/dev/nbd0", 00:07:01.827 "bdev_name": "Malloc0" 00:07:01.827 }, 00:07:01.827 { 00:07:01.827 "nbd_device": "/dev/nbd1", 00:07:01.827 "bdev_name": "Malloc1" 00:07:01.827 } 00:07:01.827 ]' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.827 { 00:07:01.827 "nbd_device": "/dev/nbd0", 00:07:01.827 "bdev_name": "Malloc0" 00:07:01.827 }, 00:07:01.827 { 00:07:01.827 "nbd_device": "/dev/nbd1", 00:07:01.827 "bdev_name": "Malloc1" 00:07:01.827 } 00:07:01.827 ]' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.827 /dev/nbd1' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.827 /dev/nbd1' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.827 13:37:04 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.827 256+0 records in 00:07:01.827 256+0 records out 00:07:01.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103925 s, 101 MB/s 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.827 256+0 records in 00:07:01.827 256+0 records out 00:07:01.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138768 s, 75.6 MB/s 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.827 256+0 records in 00:07:01.827 256+0 records out 00:07:01.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145325 s, 72.2 MB/s 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@51 -- # local i 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.827 13:37:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.085 13:37:04 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@41 -- # break 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@41 -- # break 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.085 13:37:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@65 -- # true 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.343 13:37:04 -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.343 13:37:04 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.602 13:37:04 -- event/event.sh@35 -- # sleep 3 00:07:02.861 [2024-07-11 13:37:05.092677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.861 [2024-07-11 13:37:05.126707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.861 [2024-07-11 13:37:05.126711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.861 [2024-07-11 13:37:05.168088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.861 [2024-07-11 13:37:05.168127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
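With Round 2 torn down above and the final iteration about to start, the loop that event.sh is driving has a simple shape. A schematic reconstruction, not the script verbatim (waitforlisten and the rpc.py path are abbreviated here):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # block until the app's RPC socket answers
        # create Malloc0/Malloc1, nbd_start_disks, write/verify, nbd_stop_disks
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # recycle the app
        sleep 3                                              # give it time to restart
    done

Each SIGTERM is caught by the app rather than killing the test, which is what produces the per-round "Shutdown signal received, stop current app iteration" lines once the test wraps up.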
00:07:06.150 13:37:07 -- event/event.sh@38 -- # waitforlisten 1423975 /var/tmp/spdk-nbd.sock 00:07:06.150 13:37:07 -- common/autotest_common.sh@819 -- # '[' -z 1423975 ']' 00:07:06.150 13:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.150 13:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.150 13:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.150 13:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.150 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:07:06.150 13:37:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.150 13:37:08 -- common/autotest_common.sh@852 -- # return 0 00:07:06.150 13:37:08 -- event/event.sh@39 -- # killprocess 1423975 00:07:06.150 13:37:08 -- common/autotest_common.sh@926 -- # '[' -z 1423975 ']' 00:07:06.150 13:37:08 -- common/autotest_common.sh@930 -- # kill -0 1423975 00:07:06.150 13:37:08 -- common/autotest_common.sh@931 -- # uname 00:07:06.150 13:37:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:06.150 13:37:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1423975 00:07:06.150 13:37:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:06.150 13:37:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:06.150 13:37:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1423975' 00:07:06.150 killing process with pid 1423975 00:07:06.150 13:37:08 -- common/autotest_common.sh@945 -- # kill 1423975 00:07:06.150 13:37:08 -- common/autotest_common.sh@950 -- # wait 1423975 00:07:06.150 spdk_app_start is called in Round 0. 00:07:06.150 Shutdown signal received, stop current app iteration 00:07:06.150 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:06.150 spdk_app_start is called in Round 1. 00:07:06.150 Shutdown signal received, stop current app iteration 00:07:06.150 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:06.150 spdk_app_start is called in Round 2. 00:07:06.150 Shutdown signal received, stop current app iteration 00:07:06.150 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:07:06.150 spdk_app_start is called in Round 3. 
00:07:06.150 Shutdown signal received, stop current app iteration 00:07:06.150 13:37:08 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:06.150 13:37:08 -- event/event.sh@42 -- # return 0 00:07:06.150 00:07:06.150 real 0m15.962s 00:07:06.150 user 0m34.618s 00:07:06.150 sys 0m2.278s 00:07:06.150 13:37:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.150 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.150 ************************************ 00:07:06.150 END TEST app_repeat 00:07:06.150 ************************************ 00:07:06.150 13:37:08 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:06.150 13:37:08 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.150 13:37:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.150 13:37:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.150 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.150 ************************************ 00:07:06.150 START TEST cpu_locks 00:07:06.150 ************************************ 00:07:06.150 13:37:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.150 * Looking for test storage... 00:07:06.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.151 13:37:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:06.151 13:37:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:06.151 13:37:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:06.151 13:37:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:06.151 13:37:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.151 13:37:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.151 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.151 ************************************ 00:07:06.151 START TEST default_locks 00:07:06.151 ************************************ 00:07:06.151 13:37:08 -- common/autotest_common.sh@1104 -- # default_locks 00:07:06.151 13:37:08 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.151 13:37:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1427004 00:07:06.151 13:37:08 -- event/cpu_locks.sh@47 -- # waitforlisten 1427004 00:07:06.151 13:37:08 -- common/autotest_common.sh@819 -- # '[' -z 1427004 ']' 00:07:06.151 13:37:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.151 13:37:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.151 13:37:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.151 13:37:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.151 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:07:06.151 [2024-07-11 13:37:08.471566] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:06.151 [2024-07-11 13:37:08.471616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427004 ] 00:07:06.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.151 [2024-07-11 13:37:08.525470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.151 [2024-07-11 13:37:08.564245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.151 [2024-07-11 13:37:08.564357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.089 13:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:07.089 13:37:09 -- common/autotest_common.sh@852 -- # return 0 00:07:07.090 13:37:09 -- event/cpu_locks.sh@49 -- # locks_exist 1427004 00:07:07.090 13:37:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1427004 00:07:07.090 13:37:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.090 lslocks: write error 00:07:07.090 13:37:09 -- event/cpu_locks.sh@50 -- # killprocess 1427004 00:07:07.090 13:37:09 -- common/autotest_common.sh@926 -- # '[' -z 1427004 ']' 00:07:07.090 13:37:09 -- common/autotest_common.sh@930 -- # kill -0 1427004 00:07:07.090 13:37:09 -- common/autotest_common.sh@931 -- # uname 00:07:07.090 13:37:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:07.090 13:37:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1427004 00:07:07.090 13:37:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:07.090 13:37:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:07.090 13:37:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1427004' 00:07:07.090 killing process with pid 1427004 00:07:07.090 13:37:09 -- common/autotest_common.sh@945 -- # kill 1427004 00:07:07.090 13:37:09 -- common/autotest_common.sh@950 -- # wait 1427004 00:07:07.658 13:37:09 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1427004 00:07:07.658 13:37:09 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.658 13:37:09 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1427004 00:07:07.658 13:37:09 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:07.658 13:37:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.658 13:37:09 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:07.658 13:37:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.658 13:37:09 -- common/autotest_common.sh@643 -- # waitforlisten 1427004 00:07:07.658 13:37:09 -- common/autotest_common.sh@819 -- # '[' -z 1427004 ']' 00:07:07.658 13:37:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.658 13:37:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:07.658 13:37:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
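The default_locks check above reduces to one pipeline: cpu_locks.sh asks lslocks whether the target pid holds a file lock whose name contains spdk_cpu_lock. The "lslocks: write error" in the trace is most likely benign, since grep -q exits on the first match and closes the pipe underneath lslocks. In isolation (the pid is just the example from this run):

    pid=1427004
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds an SPDK CPU core lock"   # expected for spdk_tgt -m 0x1
    else
        echo "no spdk_cpu_lock held by $pid" >&2
    fi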
00:07:07.658 13:37:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:07.658 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1427004) - No such process 00:07:07.658 ERROR: process (pid: 1427004) is no longer running 00:07:07.658 13:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:07.658 13:37:09 -- common/autotest_common.sh@852 -- # return 1 00:07:07.658 13:37:09 -- common/autotest_common.sh@643 -- # es=1 00:07:07.658 13:37:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.658 13:37:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:07.658 13:37:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.658 13:37:09 -- event/cpu_locks.sh@54 -- # no_locks 00:07:07.658 13:37:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.658 13:37:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.658 13:37:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.658 00:07:07.658 real 0m1.395s 00:07:07.658 user 0m1.477s 00:07:07.658 sys 0m0.429s 00:07:07.658 13:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.658 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.658 ************************************ 00:07:07.658 END TEST default_locks 00:07:07.658 ************************************ 00:07:07.658 13:37:09 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.658 13:37:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.658 13:37:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.658 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.658 ************************************ 00:07:07.658 START TEST default_locks_via_rpc 00:07:07.658 ************************************ 00:07:07.658 13:37:09 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:07.658 13:37:09 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1427272 00:07:07.658 13:37:09 -- event/cpu_locks.sh@63 -- # waitforlisten 1427272 00:07:07.658 13:37:09 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.658 13:37:09 -- common/autotest_common.sh@819 -- # '[' -z 1427272 ']' 00:07:07.658 13:37:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.658 13:37:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:07.658 13:37:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.658 13:37:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:07.658 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.658 [2024-07-11 13:37:09.920215] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:07.658 [2024-07-11 13:37:09.920264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427272 ] 00:07:07.658 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.658 [2024-07-11 13:37:09.973275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.658 [2024-07-11 13:37:10.012751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.658 [2024-07-11 13:37:10.012868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.595 13:37:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.595 13:37:10 -- common/autotest_common.sh@852 -- # return 0 00:07:08.595 13:37:10 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:08.595 13:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.595 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.595 13:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.595 13:37:10 -- event/cpu_locks.sh@67 -- # no_locks 00:07:08.595 13:37:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.595 13:37:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.595 13:37:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.595 13:37:10 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.595 13:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:08.595 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:07:08.595 13:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:08.595 13:37:10 -- event/cpu_locks.sh@71 -- # locks_exist 1427272 00:07:08.595 13:37:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1427272 00:07:08.595 13:37:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.853 13:37:11 -- event/cpu_locks.sh@73 -- # killprocess 1427272 00:07:08.854 13:37:11 -- common/autotest_common.sh@926 -- # '[' -z 1427272 ']' 00:07:08.854 13:37:11 -- common/autotest_common.sh@930 -- # kill -0 1427272 00:07:08.854 13:37:11 -- common/autotest_common.sh@931 -- # uname 00:07:08.854 13:37:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:08.854 13:37:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1427272 00:07:09.112 13:37:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:09.112 13:37:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:09.112 13:37:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1427272' 00:07:09.112 killing process with pid 1427272 00:07:09.112 13:37:11 -- common/autotest_common.sh@945 -- # kill 1427272 00:07:09.112 13:37:11 -- common/autotest_common.sh@950 -- # wait 1427272 00:07:09.371 00:07:09.371 real 0m1.749s 00:07:09.371 user 0m1.851s 00:07:09.371 sys 0m0.546s 00:07:09.371 13:37:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.371 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:07:09.371 ************************************ 00:07:09.371 END TEST default_locks_via_rpc 00:07:09.371 ************************************ 00:07:09.371 13:37:11 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:09.371 13:37:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.371 13:37:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.371 13:37:11 -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.371 ************************************ 00:07:09.371 START TEST non_locking_app_on_locked_coremask 00:07:09.371 ************************************ 00:07:09.371 13:37:11 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:09.371 13:37:11 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1427538 00:07:09.371 13:37:11 -- event/cpu_locks.sh@81 -- # waitforlisten 1427538 /var/tmp/spdk.sock 00:07:09.371 13:37:11 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.371 13:37:11 -- common/autotest_common.sh@819 -- # '[' -z 1427538 ']' 00:07:09.371 13:37:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.371 13:37:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:09.371 13:37:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.371 13:37:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:09.371 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:07:09.371 [2024-07-11 13:37:11.699648] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.371 [2024-07-11 13:37:11.699695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427538 ] 00:07:09.371 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.371 [2024-07-11 13:37:11.752086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.371 [2024-07-11 13:37:11.790808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.371 [2024-07-11 13:37:11.790928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.308 13:37:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.308 13:37:12 -- common/autotest_common.sh@852 -- # return 0 00:07:10.308 13:37:12 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:10.308 13:37:12 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1427767 00:07:10.308 13:37:12 -- event/cpu_locks.sh@85 -- # waitforlisten 1427767 /var/tmp/spdk2.sock 00:07:10.308 13:37:12 -- common/autotest_common.sh@819 -- # '[' -z 1427767 ']' 00:07:10.308 13:37:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.308 13:37:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.308 13:37:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.308 13:37:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.308 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:07:10.308 [2024-07-11 13:37:12.518391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:10.308 [2024-07-11 13:37:12.518439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1427767 ] 00:07:10.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.308 [2024-07-11 13:37:12.588461] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.308 [2024-07-11 13:37:12.588485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.308 [2024-07-11 13:37:12.665877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.308 [2024-07-11 13:37:12.665994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.875 13:37:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.875 13:37:13 -- common/autotest_common.sh@852 -- # return 0 00:07:10.875 13:37:13 -- event/cpu_locks.sh@87 -- # locks_exist 1427538 00:07:10.875 13:37:13 -- event/cpu_locks.sh@22 -- # lslocks -p 1427538 00:07:10.875 13:37:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.442 lslocks: write error 00:07:11.442 13:37:13 -- event/cpu_locks.sh@89 -- # killprocess 1427538 00:07:11.442 13:37:13 -- common/autotest_common.sh@926 -- # '[' -z 1427538 ']' 00:07:11.442 13:37:13 -- common/autotest_common.sh@930 -- # kill -0 1427538 00:07:11.442 13:37:13 -- common/autotest_common.sh@931 -- # uname 00:07:11.442 13:37:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:11.442 13:37:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1427538 00:07:11.442 13:37:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:11.442 13:37:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:11.442 13:37:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1427538' 00:07:11.442 killing process with pid 1427538 00:07:11.442 13:37:13 -- common/autotest_common.sh@945 -- # kill 1427538 00:07:11.442 13:37:13 -- common/autotest_common.sh@950 -- # wait 1427538 00:07:12.020 13:37:14 -- event/cpu_locks.sh@90 -- # killprocess 1427767 00:07:12.020 13:37:14 -- common/autotest_common.sh@926 -- # '[' -z 1427767 ']' 00:07:12.020 13:37:14 -- common/autotest_common.sh@930 -- # kill -0 1427767 00:07:12.020 13:37:14 -- common/autotest_common.sh@931 -- # uname 00:07:12.020 13:37:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:12.020 13:37:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1427767 00:07:12.020 13:37:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:12.020 13:37:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:12.020 13:37:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1427767' 00:07:12.020 killing process with pid 1427767 00:07:12.020 13:37:14 -- common/autotest_common.sh@945 -- # kill 1427767 00:07:12.020 13:37:14 -- common/autotest_common.sh@950 -- # wait 1427767 00:07:12.306 00:07:12.306 real 0m3.070s 00:07:12.306 user 0m3.274s 00:07:12.306 sys 0m0.868s 00:07:12.306 13:37:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.306 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:12.306 ************************************ 00:07:12.306 END TEST non_locking_app_on_locked_coremask 00:07:12.306 ************************************ 00:07:12.306 13:37:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:12.306 13:37:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.306 13:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.306 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:12.306 ************************************ 00:07:12.306 START TEST locking_app_on_unlocked_coremask 00:07:12.306 ************************************ 00:07:12.306 13:37:14 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:12.306 13:37:14 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1428048 00:07:12.306 13:37:14 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:12.306 13:37:14 -- event/cpu_locks.sh@99 -- # waitforlisten 1428048 /var/tmp/spdk.sock 00:07:12.306 13:37:14 -- common/autotest_common.sh@819 -- # '[' -z 1428048 ']' 00:07:12.564 13:37:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.564 13:37:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:12.564 13:37:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.564 13:37:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:12.564 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:12.564 [2024-07-11 13:37:14.806271] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:12.564 [2024-07-11 13:37:14.806319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428048 ] 00:07:12.564 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.564 [2024-07-11 13:37:14.860521] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.564 [2024-07-11 13:37:14.860550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.564 [2024-07-11 13:37:14.899358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.564 [2024-07-11 13:37:14.899480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.500 13:37:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.500 13:37:15 -- common/autotest_common.sh@852 -- # return 0 00:07:13.500 13:37:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1428282 00:07:13.500 13:37:15 -- event/cpu_locks.sh@103 -- # waitforlisten 1428282 /var/tmp/spdk2.sock 00:07:13.500 13:37:15 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.500 13:37:15 -- common/autotest_common.sh@819 -- # '[' -z 1428282 ']' 00:07:13.500 13:37:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.500 13:37:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.500 13:37:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
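[editor] The locks_exist checks traced above reduce to one pipeline: list the POSIX file locks held by the target pid and look for the spdk_cpu_lock prefix. A minimal sketch using only the commands visible in this trace (the helper name and pipeline are as traced; the standalone wrapper is illustrative) — the stray "lslocks: write error" lines above are most likely lslocks complaining that grep -q closed the pipe after the first match:

    locks_exist() {
        local pid=$1
        # lslocks prints one row per lock held by the pid; the CPU core
        # locks show up as /var/tmp/spdk_cpu_lock_* entries
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 1427538 && echo "pid 1427538 holds a CPU core lock"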
00:07:13.500 13:37:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.500 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:13.500 [2024-07-11 13:37:15.649493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:13.500 [2024-07-11 13:37:15.649541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428282 ] 00:07:13.500 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.500 [2024-07-11 13:37:15.720809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.500 [2024-07-11 13:37:15.799191] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.500 [2024-07-11 13:37:15.799304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.067 13:37:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.067 13:37:16 -- common/autotest_common.sh@852 -- # return 0 00:07:14.067 13:37:16 -- event/cpu_locks.sh@105 -- # locks_exist 1428282 00:07:14.067 13:37:16 -- event/cpu_locks.sh@22 -- # lslocks -p 1428282 00:07:14.067 13:37:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.326 lslocks: write error 00:07:14.326 13:37:16 -- event/cpu_locks.sh@107 -- # killprocess 1428048 00:07:14.326 13:37:16 -- common/autotest_common.sh@926 -- # '[' -z 1428048 ']' 00:07:14.326 13:37:16 -- common/autotest_common.sh@930 -- # kill -0 1428048 00:07:14.326 13:37:16 -- common/autotest_common.sh@931 -- # uname 00:07:14.326 13:37:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:14.326 13:37:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1428048 00:07:14.326 13:37:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:14.326 13:37:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:14.326 13:37:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1428048' 00:07:14.326 killing process with pid 1428048 00:07:14.326 13:37:16 -- common/autotest_common.sh@945 -- # kill 1428048 00:07:14.326 13:37:16 -- common/autotest_common.sh@950 -- # wait 1428048 00:07:14.893 13:37:17 -- event/cpu_locks.sh@108 -- # killprocess 1428282 00:07:14.893 13:37:17 -- common/autotest_common.sh@926 -- # '[' -z 1428282 ']' 00:07:14.893 13:37:17 -- common/autotest_common.sh@930 -- # kill -0 1428282 00:07:14.893 13:37:17 -- common/autotest_common.sh@931 -- # uname 00:07:14.893 13:37:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:14.893 13:37:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1428282 00:07:14.893 13:37:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:14.893 13:37:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:14.893 13:37:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1428282' 00:07:14.893 killing process with pid 1428282 00:07:14.893 13:37:17 -- common/autotest_common.sh@945 -- # kill 1428282 00:07:14.893 13:37:17 -- common/autotest_common.sh@950 -- # wait 1428282 00:07:15.459 00:07:15.459 real 0m2.878s 00:07:15.459 user 0m3.077s 00:07:15.459 sys 0m0.804s 00:07:15.459 13:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.459 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:07:15.459 ************************************ 00:07:15.459 END TEST locking_app_on_unlocked_coremask 
00:07:15.459 ************************************ 00:07:15.459 13:37:17 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:15.459 13:37:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:15.459 13:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.459 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:07:15.459 ************************************ 00:07:15.459 START TEST locking_app_on_locked_coremask 00:07:15.459 ************************************ 00:07:15.459 13:37:17 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:15.459 13:37:17 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1428630 00:07:15.459 13:37:17 -- event/cpu_locks.sh@116 -- # waitforlisten 1428630 /var/tmp/spdk.sock 00:07:15.459 13:37:17 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.459 13:37:17 -- common/autotest_common.sh@819 -- # '[' -z 1428630 ']' 00:07:15.459 13:37:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.459 13:37:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:15.459 13:37:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.459 13:37:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:15.459 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:07:15.459 [2024-07-11 13:37:17.727304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:15.459 [2024-07-11 13:37:17.727354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428630 ] 00:07:15.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.459 [2024-07-11 13:37:17.784330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.459 [2024-07-11 13:37:17.821121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.459 [2024-07-11 13:37:17.821257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.391 13:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.391 13:37:18 -- common/autotest_common.sh@852 -- # return 0 00:07:16.391 13:37:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1428790 00:07:16.391 13:37:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1428790 /var/tmp/spdk2.sock 00:07:16.391 13:37:18 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.391 13:37:18 -- common/autotest_common.sh@640 -- # local es=0 00:07:16.391 13:37:18 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1428790 /var/tmp/spdk2.sock 00:07:16.391 13:37:18 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:16.391 13:37:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.391 13:37:18 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:16.391 13:37:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:16.391 13:37:18 -- common/autotest_common.sh@643 -- # waitforlisten 1428790 /var/tmp/spdk2.sock 00:07:16.391 13:37:18 -- common/autotest_common.sh@819 -- 
# '[' -z 1428790 ']' 00:07:16.391 13:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.391 13:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.391 13:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.391 13:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.391 13:37:18 -- common/autotest_common.sh@10 -- # set +x 00:07:16.391 [2024-07-11 13:37:18.560153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:16.391 [2024-07-11 13:37:18.560204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428790 ] 00:07:16.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.391 [2024-07-11 13:37:18.636902] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1428630 has claimed it. 00:07:16.391 [2024-07-11 13:37:18.636942] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1428790) - No such process 00:07:16.970 ERROR: process (pid: 1428790) is no longer running 00:07:16.970 13:37:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.970 13:37:19 -- common/autotest_common.sh@852 -- # return 1 00:07:16.970 13:37:19 -- common/autotest_common.sh@643 -- # es=1 00:07:16.970 13:37:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:16.970 13:37:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:16.970 13:37:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:16.970 13:37:19 -- event/cpu_locks.sh@122 -- # locks_exist 1428630 00:07:16.970 13:37:19 -- event/cpu_locks.sh@22 -- # lslocks -p 1428630 00:07:16.970 13:37:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.228 lslocks: write error 00:07:17.228 13:37:19 -- event/cpu_locks.sh@124 -- # killprocess 1428630 00:07:17.228 13:37:19 -- common/autotest_common.sh@926 -- # '[' -z 1428630 ']' 00:07:17.228 13:37:19 -- common/autotest_common.sh@930 -- # kill -0 1428630 00:07:17.228 13:37:19 -- common/autotest_common.sh@931 -- # uname 00:07:17.228 13:37:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.228 13:37:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1428630 00:07:17.228 13:37:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.228 13:37:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.228 13:37:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1428630' 00:07:17.228 killing process with pid 1428630 00:07:17.228 13:37:19 -- common/autotest_common.sh@945 -- # kill 1428630 00:07:17.228 13:37:19 -- common/autotest_common.sh@950 -- # wait 1428630 00:07:17.487 00:07:17.487 real 0m2.133s 00:07:17.487 user 0m2.337s 00:07:17.487 sys 0m0.579s 00:07:17.487 13:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.487 13:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:17.487 ************************************ 00:07:17.487 END TEST locking_app_on_locked_coremask 00:07:17.487 ************************************ 00:07:17.487 
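[editor] The shape of every test in this block, reconstructed from the traced commands above: start one spdk_tgt that claims its cores via /var/tmp/spdk_cpu_lock_* files, then start a second instance on an overlapping mask with its own RPC socket and assert whether it may run. A condensed sketch (binary path and flags as printed in the log; the backgrounding and ordering are illustrative):

    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $BIN -m 0x1 &                          # claims /var/tmp/spdk_cpu_lock_000
    $BIN -m 0x1 -r /var/tmp/spdk2.sock &   # same core, separate RPC socket
    # with locks enabled the second instance aborts, as traced above:
    #   "Cannot create lock on core 0, probably process <pid> has claimed it."
    # with --disable-cpumask-locks on either side, both instances come up.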
13:37:19 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.487 13:37:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.487 13:37:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.487 13:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:17.487 ************************************ 00:07:17.487 START TEST locking_overlapped_coremask 00:07:17.487 ************************************ 00:07:17.487 13:37:19 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:17.487 13:37:19 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1429048 00:07:17.487 13:37:19 -- event/cpu_locks.sh@133 -- # waitforlisten 1429048 /var/tmp/spdk.sock 00:07:17.487 13:37:19 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.487 13:37:19 -- common/autotest_common.sh@819 -- # '[' -z 1429048 ']' 00:07:17.487 13:37:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.487 13:37:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.487 13:37:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.487 13:37:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.487 13:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:17.487 [2024-07-11 13:37:19.896268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.487 [2024-07-11 13:37:19.896316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429048 ] 00:07:17.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.745 [2024-07-11 13:37:19.949902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.745 [2024-07-11 13:37:19.985513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.745 [2024-07-11 13:37:19.985706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.745 [2024-07-11 13:37:19.985805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.745 [2024-07-11 13:37:19.985805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.310 13:37:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.310 13:37:20 -- common/autotest_common.sh@852 -- # return 0 00:07:18.310 13:37:20 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.310 13:37:20 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1429286 00:07:18.310 13:37:20 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1429286 /var/tmp/spdk2.sock 00:07:18.310 13:37:20 -- common/autotest_common.sh@640 -- # local es=0 00:07:18.310 13:37:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1429286 /var/tmp/spdk2.sock 00:07:18.310 13:37:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:18.310 13:37:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.310 13:37:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:18.310 13:37:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.310 13:37:20 
-- common/autotest_common.sh@643 -- # waitforlisten 1429286 /var/tmp/spdk2.sock 00:07:18.310 13:37:20 -- common/autotest_common.sh@819 -- # '[' -z 1429286 ']' 00:07:18.310 13:37:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.310 13:37:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.310 13:37:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.310 13:37:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.310 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:07:18.310 [2024-07-11 13:37:20.736399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:18.310 [2024-07-11 13:37:20.736448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429286 ] 00:07:18.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.568 [2024-07-11 13:37:20.814581] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1429048 has claimed it. 00:07:18.568 [2024-07-11 13:37:20.814623] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1429286) - No such process 00:07:19.137 ERROR: process (pid: 1429286) is no longer running 00:07:19.137 13:37:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.137 13:37:21 -- common/autotest_common.sh@852 -- # return 1 00:07:19.137 13:37:21 -- common/autotest_common.sh@643 -- # es=1 00:07:19.137 13:37:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:19.137 13:37:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:19.137 13:37:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:19.137 13:37:21 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.137 13:37:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.137 13:37:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.137 13:37:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.137 13:37:21 -- event/cpu_locks.sh@141 -- # killprocess 1429048 00:07:19.137 13:37:21 -- common/autotest_common.sh@926 -- # '[' -z 1429048 ']' 00:07:19.137 13:37:21 -- common/autotest_common.sh@930 -- # kill -0 1429048 00:07:19.137 13:37:21 -- common/autotest_common.sh@931 -- # uname 00:07:19.137 13:37:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:19.137 13:37:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1429048 00:07:19.137 13:37:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:19.137 13:37:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:19.137 13:37:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1429048' 00:07:19.137 killing process with pid 1429048 00:07:19.137 13:37:21 -- common/autotest_common.sh@945 -- # kill 1429048 00:07:19.137 13:37:21 
-- common/autotest_common.sh@950 -- # wait 1429048 00:07:19.395 00:07:19.395 real 0m1.857s 00:07:19.395 user 0m5.336s 00:07:19.395 sys 0m0.388s 00:07:19.395 13:37:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.395 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.395 ************************************ 00:07:19.395 END TEST locking_overlapped_coremask 00:07:19.395 ************************************ 00:07:19.395 13:37:21 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.395 13:37:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.395 13:37:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.395 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.395 ************************************ 00:07:19.395 START TEST locking_overlapped_coremask_via_rpc 00:07:19.395 ************************************ 00:07:19.395 13:37:21 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:19.395 13:37:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1429399 00:07:19.395 13:37:21 -- event/cpu_locks.sh@149 -- # waitforlisten 1429399 /var/tmp/spdk.sock 00:07:19.395 13:37:21 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.395 13:37:21 -- common/autotest_common.sh@819 -- # '[' -z 1429399 ']' 00:07:19.395 13:37:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.395 13:37:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:19.395 13:37:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.395 13:37:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:19.395 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:07:19.395 [2024-07-11 13:37:21.793614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:19.395 [2024-07-11 13:37:21.793663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429399 ] 00:07:19.395 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.395 [2024-07-11 13:37:21.845113] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.395 [2024-07-11 13:37:21.845141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.654 [2024-07-11 13:37:21.883903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.654 [2024-07-11 13:37:21.884046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.654 [2024-07-11 13:37:21.884146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.654 [2024-07-11 13:37:21.884149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.221 13:37:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:20.221 13:37:22 -- common/autotest_common.sh@852 -- # return 0 00:07:20.221 13:37:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1429562 00:07:20.221 13:37:22 -- event/cpu_locks.sh@153 -- # waitforlisten 1429562 /var/tmp/spdk2.sock 00:07:20.221 13:37:22 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.221 13:37:22 -- common/autotest_common.sh@819 -- # '[' -z 1429562 ']' 00:07:20.221 13:37:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.221 13:37:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:20.221 13:37:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.221 13:37:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:20.221 13:37:22 -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 [2024-07-11 13:37:22.656285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:20.222 [2024-07-11 13:37:22.656336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429562 ] 00:07:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.480 [2024-07-11 13:37:22.732174] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.480 [2024-07-11 13:37:22.732202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.480 [2024-07-11 13:37:22.810616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.480 [2024-07-11 13:37:22.810781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.480 [2024-07-11 13:37:22.814205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.480 [2024-07-11 13:37:22.814206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.048 13:37:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:21.048 13:37:23 -- common/autotest_common.sh@852 -- # return 0 00:07:21.048 13:37:23 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.048 13:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:21.048 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.048 13:37:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:21.048 13:37:23 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.048 13:37:23 -- common/autotest_common.sh@640 -- # local es=0 00:07:21.048 13:37:23 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.048 13:37:23 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:21.048 13:37:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:21.048 13:37:23 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:21.048 13:37:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:21.048 13:37:23 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.048 13:37:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:21.048 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.048 [2024-07-11 13:37:23.475224] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1429399 has claimed it. 00:07:21.048 request: 00:07:21.048 { 00:07:21.048 "method": "framework_enable_cpumask_locks", 00:07:21.048 "req_id": 1 00:07:21.048 } 00:07:21.048 Got JSON-RPC error response 00:07:21.048 response: 00:07:21.048 { 00:07:21.048 "code": -32603, 00:07:21.048 "message": "Failed to claim CPU core: 2" 00:07:21.048 } 00:07:21.048 13:37:23 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:21.048 13:37:23 -- common/autotest_common.sh@643 -- # es=1 00:07:21.048 13:37:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:21.048 13:37:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:21.048 13:37:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:21.048 13:37:23 -- event/cpu_locks.sh@158 -- # waitforlisten 1429399 /var/tmp/spdk.sock 00:07:21.048 13:37:23 -- common/autotest_common.sh@819 -- # '[' -z 1429399 ']' 00:07:21.048 13:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.048 13:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.048 13:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
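[editor] The JSON-RPC exchange above shows the failure mode directly: framework_enable_cpumask_locks returns -32603 ("Failed to claim CPU core: 2") because the first target still holds that core's lock file. The rpc_cmd calls traced here could be reproduced by hand roughly as below, assuming SPDK's usual scripts/rpc.py entry point (method name and socket are as traced; the script path is an assumption):

    # ask the second instance to opt back in to core locking while the
    # first instance still holds /var/tmp/spdk_cpu_lock_002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"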
00:07:21.048 13:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.048 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.307 13:37:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:21.307 13:37:23 -- common/autotest_common.sh@852 -- # return 0 00:07:21.307 13:37:23 -- event/cpu_locks.sh@159 -- # waitforlisten 1429562 /var/tmp/spdk2.sock 00:07:21.307 13:37:23 -- common/autotest_common.sh@819 -- # '[' -z 1429562 ']' 00:07:21.307 13:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.307 13:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:21.307 13:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.307 13:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:21.307 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.566 13:37:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:21.566 13:37:23 -- common/autotest_common.sh@852 -- # return 0 00:07:21.566 13:37:23 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.566 13:37:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.566 13:37:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.566 13:37:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.566 00:07:21.566 real 0m2.089s 00:07:21.566 user 0m0.874s 00:07:21.566 sys 0m0.144s 00:07:21.566 13:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.566 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:21.566 ************************************ 00:07:21.566 END TEST locking_overlapped_coremask_via_rpc 00:07:21.566 ************************************ 00:07:21.566 13:37:23 -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.566 13:37:23 -- event/cpu_locks.sh@15 -- # [[ -z 1429399 ]] 00:07:21.566 13:37:23 -- event/cpu_locks.sh@15 -- # killprocess 1429399 00:07:21.566 13:37:23 -- common/autotest_common.sh@926 -- # '[' -z 1429399 ']' 00:07:21.566 13:37:23 -- common/autotest_common.sh@930 -- # kill -0 1429399 00:07:21.566 13:37:23 -- common/autotest_common.sh@931 -- # uname 00:07:21.566 13:37:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:21.566 13:37:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1429399 00:07:21.566 13:37:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:21.566 13:37:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:21.566 13:37:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1429399' 00:07:21.566 killing process with pid 1429399 00:07:21.566 13:37:23 -- common/autotest_common.sh@945 -- # kill 1429399 00:07:21.566 13:37:23 -- common/autotest_common.sh@950 -- # wait 1429399 00:07:21.825 13:37:24 -- event/cpu_locks.sh@16 -- # [[ -z 1429562 ]] 00:07:21.825 13:37:24 -- event/cpu_locks.sh@16 -- # killprocess 1429562 00:07:21.825 13:37:24 -- common/autotest_common.sh@926 -- # '[' -z 1429562 ']' 00:07:21.825 13:37:24 -- common/autotest_common.sh@930 -- # kill -0 1429562 00:07:21.825 13:37:24 -- common/autotest_common.sh@931 -- # uname 
00:07:21.825 13:37:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:21.825 13:37:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1429562 00:07:21.825 13:37:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:21.825 13:37:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:21.825 13:37:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1429562' 00:07:21.825 killing process with pid 1429562 00:07:21.825 13:37:24 -- common/autotest_common.sh@945 -- # kill 1429562 00:07:21.825 13:37:24 -- common/autotest_common.sh@950 -- # wait 1429562 00:07:22.394 13:37:24 -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.394 13:37:24 -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.394 13:37:24 -- event/cpu_locks.sh@15 -- # [[ -z 1429399 ]] 00:07:22.394 13:37:24 -- event/cpu_locks.sh@15 -- # killprocess 1429399 00:07:22.394 13:37:24 -- common/autotest_common.sh@926 -- # '[' -z 1429399 ']' 00:07:22.394 13:37:24 -- common/autotest_common.sh@930 -- # kill -0 1429399 00:07:22.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1429399) - No such process 00:07:22.394 13:37:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1429399 is not found' 00:07:22.394 Process with pid 1429399 is not found 00:07:22.394 13:37:24 -- event/cpu_locks.sh@16 -- # [[ -z 1429562 ]] 00:07:22.394 13:37:24 -- event/cpu_locks.sh@16 -- # killprocess 1429562 00:07:22.394 13:37:24 -- common/autotest_common.sh@926 -- # '[' -z 1429562 ']' 00:07:22.394 13:37:24 -- common/autotest_common.sh@930 -- # kill -0 1429562 00:07:22.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1429562) - No such process 00:07:22.394 13:37:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1429562 is not found' 00:07:22.394 Process with pid 1429562 is not found 00:07:22.394 13:37:24 -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.394 00:07:22.394 real 0m16.227s 00:07:22.394 user 0m28.723s 00:07:22.394 sys 0m4.539s 00:07:22.394 13:37:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.394 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.394 ************************************ 00:07:22.394 END TEST cpu_locks 00:07:22.394 ************************************ 00:07:22.394 00:07:22.394 real 0m40.242s 00:07:22.394 user 1m17.125s 00:07:22.394 sys 0m7.584s 00:07:22.394 13:37:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.394 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.394 ************************************ 00:07:22.394 END TEST event 00:07:22.394 ************************************ 00:07:22.394 13:37:24 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.394 13:37:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.394 13:37:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.394 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.394 ************************************ 00:07:22.394 START TEST thread 00:07:22.394 ************************************ 00:07:22.394 13:37:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.394 * Looking for test storage... 
00:07:22.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:22.394 13:37:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.394 13:37:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:22.394 13:37:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.394 13:37:24 -- common/autotest_common.sh@10 -- # set +x 00:07:22.394 ************************************ 00:07:22.394 START TEST thread_poller_perf 00:07:22.394 ************************************ 00:07:22.394 13:37:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.394 [2024-07-11 13:37:24.751332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:22.394 [2024-07-11 13:37:24.751405] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430113 ] 00:07:22.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.394 [2024-07-11 13:37:24.810636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.394 [2024-07-11 13:37:24.848973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:23.773 ====================================== 00:07:23.773 busy:2307793438 (cyc) 00:07:23.773 total_run_count: 391000 00:07:23.773 tsc_hz: 2300000000 (cyc) 00:07:23.773 ====================================== 00:07:23.773 poller_cost: 5902 (cyc), 2566 (nsec) 00:07:23.773 00:07:23.773 real 0m1.184s 00:07:23.773 user 0m1.098s 00:07:23.773 sys 0m0.081s 00:07:23.773 13:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.773 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 ************************************ 00:07:23.773 END TEST thread_poller_perf 00:07:23.773 ************************************ 00:07:23.773 13:37:25 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.773 13:37:25 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:23.773 13:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.773 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 ************************************ 00:07:23.773 START TEST thread_poller_perf 00:07:23.773 ************************************ 00:07:23.773 13:37:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.773 [2024-07-11 13:37:25.967078] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:23.773 [2024-07-11 13:37:25.967149] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430288 ] 00:07:23.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.773 [2024-07-11 13:37:26.024277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.773 [2024-07-11 13:37:26.060943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.773 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:24.710 ====================================== 00:07:24.710 busy:2302043944 (cyc) 00:07:24.710 total_run_count: 5481000 00:07:24.710 tsc_hz: 2300000000 (cyc) 00:07:24.710 ====================================== 00:07:24.710 poller_cost: 420 (cyc), 182 (nsec) 00:07:24.710 00:07:24.710 real 0m1.171s 00:07:24.710 user 0m1.092s 00:07:24.710 sys 0m0.076s 00:07:24.710 13:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.710 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.710 ************************************ 00:07:24.710 END TEST thread_poller_perf 00:07:24.710 ************************************ 00:07:24.711 13:37:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:24.711 00:07:24.711 real 0m2.514s 00:07:24.711 user 0m2.255s 00:07:24.711 sys 0m0.271s 00:07:24.711 13:37:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.711 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.711 ************************************ 00:07:24.711 END TEST thread 00:07:24.711 ************************************ 00:07:24.969 13:37:27 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:24.969 13:37:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:24.969 13:37:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.969 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.969 ************************************ 00:07:24.969 START TEST accel 00:07:24.969 ************************************ 00:07:24.969 13:37:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:24.970 * Looking for test storage... 00:07:24.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:24.970 13:37:27 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:24.970 13:37:27 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:24.970 13:37:27 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:24.970 13:37:27 -- accel/accel.sh@59 -- # spdk_tgt_pid=1430555 00:07:24.970 13:37:27 -- accel/accel.sh@60 -- # waitforlisten 1430555 00:07:24.970 13:37:27 -- common/autotest_common.sh@819 -- # '[' -z 1430555 ']' 00:07:24.970 13:37:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.970 13:37:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:24.970 13:37:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
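[editor] The two poller_perf summaries above are internally consistent: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from the TSC rate (2300000000 cyc/s, i.e. 2.3 cyc/ns). A quick check with shell integer arithmetic and the logged values:

    echo $((2307793438 / 391000))              # 5902 cyc  (1 usec period run)
    echo $((5902 * 1000000000 / 2300000000))   # 2566 nsec
    echo $((2302043944 / 5481000))             # 420 cyc   (0 usec period run)
    echo $((420 * 1000000000 / 2300000000))    # 182 nsec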
00:07:24.970 13:37:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:24.970 13:37:27 -- common/autotest_common.sh@10 -- # set +x 00:07:24.970 13:37:27 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:24.970 13:37:27 -- accel/accel.sh@58 -- # build_accel_config 00:07:24.970 13:37:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.970 13:37:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.970 13:37:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.970 13:37:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.970 13:37:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.970 13:37:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.970 13:37:27 -- accel/accel.sh@42 -- # jq -r . 00:07:24.970 [2024-07-11 13:37:27.311198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:24.970 [2024-07-11 13:37:27.311255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430555 ] 00:07:24.970 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.970 [2024-07-11 13:37:27.364538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.970 [2024-07-11 13:37:27.402546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.970 [2024-07-11 13:37:27.402680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.908 13:37:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.908 13:37:28 -- common/autotest_common.sh@852 -- # return 0 00:07:25.908 13:37:28 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:25.908 13:37:28 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:25.908 13:37:28 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:25.908 13:37:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.908 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:25.908 13:37:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 
13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # IFS== 00:07:25.908 13:37:28 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.908 13:37:28 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.908 13:37:28 -- accel/accel.sh@67 -- # killprocess 1430555 00:07:25.908 13:37:28 -- common/autotest_common.sh@926 -- # '[' -z 1430555 ']' 00:07:25.908 13:37:28 -- common/autotest_common.sh@930 -- # kill -0 1430555 00:07:25.908 13:37:28 -- common/autotest_common.sh@931 -- # uname 00:07:25.908 13:37:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.908 13:37:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1430555 00:07:25.908 13:37:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.908 13:37:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.908 13:37:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1430555' 00:07:25.908 killing process with pid 1430555 00:07:25.908 13:37:28 -- common/autotest_common.sh@945 -- # kill 1430555 00:07:25.908 13:37:28 -- common/autotest_common.sh@950 -- # wait 1430555 00:07:26.167 13:37:28 -- accel/accel.sh@68 -- # trap - ERR 00:07:26.167 13:37:28 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:26.167 13:37:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:26.167 13:37:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.167 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.167 13:37:28 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:26.167 13:37:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:26.167 13:37:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.167 13:37:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.167 13:37:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.167 13:37:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.167 13:37:28 -- accel/accel.sh@42 -- # jq -r . 
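[editor] The expected_opcs bookkeeping above is a single mapping: query which module backs each accel opcode, flatten the JSON to opc=module pairs, and read them back with IFS==. The jq filter below is verbatim from the trace; invoking it through scripts/rpc.py rather than the harness's rpc_cmd wrapper is an assumption:

    scripts/rpc.py accel_get_opc_assignments |
        jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
        while IFS== read -r opc module; do
            echo "opcode $opc -> $module"   # e.g. copy -> software, as above
        done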
00:07:26.167 13:37:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.167 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.167 13:37:28 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:26.167 13:37:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:26.167 13:37:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.167 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.167 ************************************ 00:07:26.167 START TEST accel_missing_filename 00:07:26.167 ************************************ 00:07:26.167 13:37:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:26.167 13:37:28 -- common/autotest_common.sh@640 -- # local es=0 00:07:26.167 13:37:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:26.167 13:37:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:26.167 13:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.167 13:37:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:26.167 13:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.167 13:37:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:26.167 13:37:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:26.167 13:37:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.167 13:37:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.167 13:37:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.167 13:37:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.167 13:37:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.167 13:37:28 -- accel/accel.sh@42 -- # jq -r . 00:07:26.167 [2024-07-11 13:37:28.586724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.167 [2024-07-11 13:37:28.586806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430776 ] 00:07:26.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.427 [2024-07-11 13:37:28.641655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.427 [2024-07-11 13:37:28.678650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.427 [2024-07-11 13:37:28.718592] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.427 [2024-07-11 13:37:28.777941] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:26.427 A filename is required. 
00:07:26.427 13:37:28 -- common/autotest_common.sh@643 -- # es=234 00:07:26.427 13:37:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:26.427 13:37:28 -- common/autotest_common.sh@652 -- # es=106 00:07:26.427 13:37:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:26.427 13:37:28 -- common/autotest_common.sh@660 -- # es=1 00:07:26.427 13:37:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:26.427 00:07:26.427 real 0m0.280s 00:07:26.427 user 0m0.203s 00:07:26.427 sys 0m0.114s 00:07:26.427 13:37:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.427 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.427 ************************************ 00:07:26.427 END TEST accel_missing_filename 00:07:26.427 ************************************ 00:07:26.427 13:37:28 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.427 13:37:28 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:26.427 13:37:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.427 13:37:28 -- common/autotest_common.sh@10 -- # set +x 00:07:26.427 ************************************ 00:07:26.427 START TEST accel_compress_verify 00:07:26.427 ************************************ 00:07:26.427 13:37:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.427 13:37:28 -- common/autotest_common.sh@640 -- # local es=0 00:07:26.427 13:37:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.427 13:37:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:26.427 13:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.427 13:37:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:26.427 13:37:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.427 13:37:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.427 13:37:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:26.427 13:37:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.427 13:37:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.427 13:37:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.427 13:37:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.427 13:37:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.427 13:37:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.427 13:37:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.427 13:37:28 -- accel/accel.sh@42 -- # jq -r . 00:07:26.686 [2024-07-11 13:37:28.901477] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:26.686 [2024-07-11 13:37:28.901533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1430946 ] 00:07:26.686 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.686 [2024-07-11 13:37:28.955048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.686 [2024-07-11 13:37:28.991801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.686 [2024-07-11 13:37:29.032332] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.686 [2024-07-11 13:37:29.091734] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:26.946 00:07:26.946 Compression does not support the verify option, aborting. 00:07:26.946 13:37:29 -- common/autotest_common.sh@643 -- # es=161 00:07:26.947 13:37:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:26.947 13:37:29 -- common/autotest_common.sh@652 -- # es=33 00:07:26.947 13:37:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:26.947 13:37:29 -- common/autotest_common.sh@660 -- # es=1 00:07:26.947 13:37:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:26.947 00:07:26.947 real 0m0.279s 00:07:26.947 user 0m0.206s 00:07:26.947 sys 0m0.110s 00:07:26.947 13:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 END TEST accel_compress_verify 00:07:26.947 ************************************ 00:07:26.947 13:37:29 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:26.947 13:37:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:26.947 13:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 START TEST accel_wrong_workload 00:07:26.947 ************************************ 00:07:26.947 13:37:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:26.947 13:37:29 -- common/autotest_common.sh@640 -- # local es=0 00:07:26.947 13:37:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:26.947 13:37:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.947 13:37:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.947 13:37:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.947 13:37:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.947 13:37:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.947 13:37:29 -- accel/accel.sh@42 -- # jq -r . 
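The es= chains above (es=234 dropping to 106, and es=161 to 33, both ending at 1) show the wrapper normalizing exit statuses: anything above 128 is treated as carrying the conventional 128+N signal/offset encoding and has the offset stripped, and the case statement then collapses the remaining code to a plain failure of 1. A hedged reconstruction of that folding, with the case arm simplified to the one taken in this run:

false                      # stand-in for the wrapped accel_perf run
es=$?
if (( es > 128 )); then
    es=$((es - 128))       # strip the offset: 234 -> 106, 161 -> 33
fi
case "$es" in
    *) es=1 ;;             # the arm taken in this run collapses the code to 1
esac
(( !es == 0 ))             # exits true only when es != 0, i.e. the command failed as expected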
00:07:26.947 Unsupported workload type: foobar 00:07:26.947 [2024-07-11 13:37:29.218536] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:26.947 accel_perf options: 00:07:26.947 [-h help message] 00:07:26.947 [-q queue depth per core] 00:07:26.947 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:26.947 [-T number of threads per core] 00:07:26.947 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:26.947 [-t time in seconds] 00:07:26.947 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, dif_verify, dif_generate, dif_generate_copy] 00:07:26.947 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:26.947 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:26.947 [-S for crc32c workload, use this seed value (default 0)] 00:07:26.947 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:26.947 [-f for fill workload, use this BYTE value (default 255)] 00:07:26.947 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:26.947 [-y verify result if this switch is on] 00:07:26.947 [-a tasks to allocate per core (default: same value as -q)] 00:07:26.947 Can be used to spread operations across a wider range of memory. 00:07:26.947 13:37:29 -- common/autotest_common.sh@643 -- # es=1 00:07:26.947 13:37:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:26.947 13:37:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:26.947 13:37:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:26.947 00:07:26.947 real 0m0.031s 00:07:26.947 user 0m0.019s 00:07:26.947 sys 0m0.012s 00:07:26.947 13:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 END TEST accel_wrong_workload 00:07:26.947 ************************************ 00:07:26.947 Error: writing output failed: Broken pipe 00:07:26.947 13:37:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:26.947 13:37:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:26.947 13:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 START TEST accel_negative_buffers 00:07:26.947 ************************************ 00:07:26.947 13:37:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:26.947 13:37:29 -- common/autotest_common.sh@640 -- # local es=0 00:07:26.947 13:37:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:26.947 13:37:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:26.947 13:37:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:26.947 13:37:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w
xor -y -x -1 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.947 13:37:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.947 13:37:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.947 13:37:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.947 13:37:29 -- accel/accel.sh@42 -- # jq -r . 00:07:26.947 -x option must be non-negative. 00:07:26.947 [2024-07-11 13:37:29.286863] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:26.947 accel_perf options: 00:07:26.947 [-h help message] 00:07:26.947 [-q queue depth per core] 00:07:26.947 [-C for supported workloads, use this value to configure the io vector size to test (default 1)] 00:07:26.947 [-T number of threads per core] 00:07:26.947 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:26.947 [-t time in seconds] 00:07:26.947 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, dif_verify, dif_generate, dif_generate_copy] 00:07:26.947 [-M assign module to the operation, not compatible with accel_assign_opc RPC] 00:07:26.947 [-l for compress/decompress workloads, name of uncompressed input file] 00:07:26.947 [-S for crc32c workload, use this seed value (default 0)] 00:07:26.947 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)] 00:07:26.947 [-f for fill workload, use this BYTE value (default 255)] 00:07:26.947 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:26.947 [-y verify result if this switch is on] 00:07:26.947 [-a tasks to allocate per core (default: same value as -q)] 00:07:26.947 Can be used to spread operations across a wider range of memory.
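For contrast with the rejected -x -1 run above, a well-formed xor invocation under the listed options would use the documented minimum of two source buffers; the binary path follows this workspace's layout:

# one-second xor run with two source buffers (the documented minimum), verifying results
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -y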
00:07:26.947 13:37:29 -- common/autotest_common.sh@643 -- # es=1 00:07:26.947 13:37:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:26.947 13:37:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:26.947 13:37:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:26.947 00:07:26.947 real 0m0.032s 00:07:26.947 user 0m0.018s 00:07:26.947 sys 0m0.014s 00:07:26.947 13:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 END TEST accel_negative_buffers 00:07:26.947 ************************************ 00:07:26.947 Error: writing output failed: Broken pipe 00:07:26.947 13:37:29 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:26.947 13:37:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:26.947 13:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.947 13:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:26.947 ************************************ 00:07:26.947 START TEST accel_crc32c 00:07:26.947 ************************************ 00:07:26.947 13:37:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:26.947 13:37:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.947 13:37:29 -- accel/accel.sh@17 -- # local accel_module 00:07:26.947 13:37:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:26.947 13:37:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.947 13:37:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.947 13:37:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.947 13:37:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.947 13:37:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.947 13:37:29 -- accel/accel.sh@42 -- # jq -r . 00:07:26.947 [2024-07-11 13:37:29.358675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.947 [2024-07-11 13:37:29.358749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431008 ] 00:07:26.947 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.242 [2024-07-11 13:37:29.417609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.242 [2024-07-11 13:37:29.462998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.182 13:37:30 -- accel/accel.sh@18 -- # out=' 00:07:28.182 SPDK Configuration: 00:07:28.182 Core mask: 0x1 00:07:28.182 00:07:28.182 Accel Perf Configuration: 00:07:28.182 Workload Type: crc32c 00:07:28.182 CRC-32C seed: 32 00:07:28.182 Transfer size: 4096 bytes 00:07:28.182 Vector count 1 00:07:28.182 Module: software 00:07:28.182 Queue depth: 32 00:07:28.182 Allocate depth: 32 00:07:28.182 # threads/core: 1 00:07:28.182 Run time: 1 seconds 00:07:28.182 Verify: Yes 00:07:28.182 00:07:28.182 Running for 1 seconds... 
00:07:28.182 00:07:28.182 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.182 ------------------------------------------------------------------------------------ 00:07:28.182 0,0 568416/s 2220 MiB/s 0 0 00:07:28.182 ==================================================================================== 00:07:28.182 Total 568416/s 2220 MiB/s 0 0' 00:07:28.182 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.182 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.182 13:37:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:28.182 13:37:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:28.441 13:37:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.441 13:37:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.441 13:37:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.441 13:37:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.441 13:37:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.441 13:37:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.441 13:37:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.441 13:37:30 -- accel/accel.sh@42 -- # jq -r . 00:07:28.441 [2024-07-11 13:37:30.660179] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.441 [2024-07-11 13:37:30.660256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431244 ] 00:07:28.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.441 [2024-07-11 13:37:30.714573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.441 [2024-07-11 13:37:30.750699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val=0x1 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.441 13:37:30 -- accel/accel.sh@21 -- # val=crc32c 00:07:28.441 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.441 13:37:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:28.441 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=32 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 
13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=software 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=32 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=32 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=1 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val=Yes 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:28.442 13:37:30 -- accel/accel.sh@21 -- # val= 00:07:28.442 13:37:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # IFS=: 00:07:28.442 13:37:30 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 
00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@21 -- # val= 00:07:29.820 13:37:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # IFS=: 00:07:29.820 13:37:31 -- accel/accel.sh@20 -- # read -r var val 00:07:29.820 13:37:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.820 13:37:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:29.820 13:37:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.820 00:07:29.820 real 0m2.592s 00:07:29.820 user 0m2.358s 00:07:29.820 sys 0m0.243s 00:07:29.820 13:37:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.820 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:07:29.820 ************************************ 00:07:29.820 END TEST accel_crc32c 00:07:29.820 ************************************ 00:07:29.820 13:37:31 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:29.820 13:37:31 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:29.820 13:37:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.820 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:07:29.820 ************************************ 00:07:29.820 START TEST accel_crc32c_C2 00:07:29.820 ************************************ 00:07:29.820 13:37:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:29.820 13:37:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.820 13:37:31 -- accel/accel.sh@17 -- # local accel_module 00:07:29.820 13:37:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:29.820 13:37:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:29.820 13:37:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.820 13:37:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.820 13:37:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.820 13:37:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.820 13:37:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.820 13:37:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.820 13:37:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.820 13:37:31 -- accel/accel.sh@42 -- # jq -r . 00:07:29.820 [2024-07-11 13:37:31.986949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:29.820 [2024-07-11 13:37:31.987008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431491 ] 00:07:29.820 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.820 [2024-07-11 13:37:32.040618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.820 [2024-07-11 13:37:32.077375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.201 13:37:33 -- accel/accel.sh@18 -- # out=' 00:07:31.201 SPDK Configuration: 00:07:31.201 Core mask: 0x1 00:07:31.201 00:07:31.201 Accel Perf Configuration: 00:07:31.201 Workload Type: crc32c 00:07:31.201 CRC-32C seed: 0 00:07:31.201 Transfer size: 4096 bytes 00:07:31.201 Vector count 2 00:07:31.201 Module: software 00:07:31.201 Queue depth: 32 00:07:31.201 Allocate depth: 32 00:07:31.201 # threads/core: 1 00:07:31.201 Run time: 1 seconds 00:07:31.201 Verify: Yes 00:07:31.201 00:07:31.201 Running for 1 seconds... 00:07:31.201 00:07:31.201 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.201 ------------------------------------------------------------------------------------ 00:07:31.201 0,0 450016/s 1757 MiB/s 0 0 00:07:31.201 ==================================================================================== 00:07:31.201 Total 450016/s 1757 MiB/s 0 0' 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:31.201 13:37:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:31.201 13:37:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.201 13:37:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.201 13:37:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.201 13:37:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.201 13:37:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.201 13:37:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.201 13:37:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.201 13:37:33 -- accel/accel.sh@42 -- # jq -r . 00:07:31.201 [2024-07-11 13:37:33.269925] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:07:31.201 [2024-07-11 13:37:33.270002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431731 ] 00:07:31.201 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.201 [2024-07-11 13:37:33.323882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.201 [2024-07-11 13:37:33.359817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=0x1 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=crc32c 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=0 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=software 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=32 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=32 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- 
accel/accel.sh@21 -- # val=1 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val=Yes 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:31.201 13:37:33 -- accel/accel.sh@21 -- # val= 00:07:31.201 13:37:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # IFS=: 00:07:31.201 13:37:33 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.135 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.135 13:37:34 -- accel/accel.sh@21 -- # val= 00:07:32.135 13:37:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.136 13:37:34 -- accel/accel.sh@20 -- # IFS=: 00:07:32.136 13:37:34 -- accel/accel.sh@20 -- # read -r var val 00:07:32.136 13:37:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.136 13:37:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:32.136 13:37:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.136 00:07:32.136 real 0m2.570s 00:07:32.136 user 0m2.353s 00:07:32.136 sys 0m0.223s 00:07:32.136 13:37:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.136 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:07:32.136 ************************************ 00:07:32.136 END TEST accel_crc32c_C2 00:07:32.136 ************************************ 00:07:32.136 13:37:34 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:32.136 13:37:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:32.136 13:37:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.136 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:07:32.136 ************************************ 00:07:32.136 START TEST accel_copy 
00:07:32.136 ************************************ 00:07:32.136 13:37:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:32.136 13:37:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.136 13:37:34 -- accel/accel.sh@17 -- # local accel_module 00:07:32.136 13:37:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:32.136 13:37:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:32.136 13:37:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.136 13:37:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.136 13:37:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.136 13:37:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.136 13:37:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.136 13:37:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.136 13:37:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.136 13:37:34 -- accel/accel.sh@42 -- # jq -r . 00:07:32.394 [2024-07-11 13:37:34.595312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:32.394 [2024-07-11 13:37:34.595369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431980 ] 00:07:32.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.394 [2024-07-11 13:37:34.649130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.394 [2024-07-11 13:37:34.686318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.771 13:37:35 -- accel/accel.sh@18 -- # out=' 00:07:33.771 SPDK Configuration: 00:07:33.771 Core mask: 0x1 00:07:33.771 00:07:33.771 Accel Perf Configuration: 00:07:33.771 Workload Type: copy 00:07:33.771 Transfer size: 4096 bytes 00:07:33.771 Vector count 1 00:07:33.771 Module: software 00:07:33.771 Queue depth: 32 00:07:33.771 Allocate depth: 32 00:07:33.771 # threads/core: 1 00:07:33.771 Run time: 1 seconds 00:07:33.771 Verify: Yes 00:07:33.771 00:07:33.771 Running for 1 seconds... 00:07:33.771 00:07:33.771 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.771 ------------------------------------------------------------------------------------ 00:07:33.771 0,0 424736/s 1659 MiB/s 0 0 00:07:33.771 ==================================================================================== 00:07:33.771 Total 424736/s 1659 MiB/s 0 0' 00:07:33.771 13:37:35 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:35 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:33.771 13:37:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:33.771 13:37:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.771 13:37:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.771 13:37:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.771 13:37:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.771 13:37:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.771 13:37:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.771 13:37:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.771 13:37:35 -- accel/accel.sh@42 -- # jq -r . 00:07:33.771 [2024-07-11 13:37:35.878531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:33.771 [2024-07-11 13:37:35.878608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432216 ] 00:07:33.771 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.771 [2024-07-11 13:37:35.932469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.771 [2024-07-11 13:37:35.967953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val=0x1 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val=copy 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.771 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.771 13:37:36 -- accel/accel.sh@21 -- # val=software 00:07:33.771 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val=32 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val=32 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val=1 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val=Yes 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:33.772 13:37:36 -- accel/accel.sh@21 -- # val= 00:07:33.772 13:37:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # IFS=: 00:07:33.772 13:37:36 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@21 -- # val= 00:07:34.708 13:37:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # IFS=: 00:07:34.708 13:37:37 -- accel/accel.sh@20 -- # read -r var val 00:07:34.708 13:37:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.708 13:37:37 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:34.708 13:37:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.708 00:07:34.708 real 0m2.570s 00:07:34.708 user 0m2.362s 00:07:34.708 sys 0m0.215s 00:07:34.708 13:37:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.708 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.708 ************************************ 00:07:34.708 END TEST accel_copy 00:07:34.708 ************************************ 00:07:34.967 13:37:37 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.967 13:37:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:34.967 13:37:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.967 13:37:37 -- common/autotest_common.sh@10 -- # set +x 00:07:34.967 ************************************ 00:07:34.967 START TEST accel_fill 00:07:34.967 ************************************ 00:07:34.967 13:37:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.967 13:37:37 -- accel/accel.sh@16 -- # local accel_opc 
00:07:34.967 13:37:37 -- accel/accel.sh@17 -- # local accel_module 00:07:34.967 13:37:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.967 13:37:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:34.967 13:37:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.967 13:37:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.967 13:37:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.967 13:37:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.967 13:37:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.967 13:37:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.967 13:37:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.967 13:37:37 -- accel/accel.sh@42 -- # jq -r . 00:07:34.967 [2024-07-11 13:37:37.204075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:34.967 [2024-07-11 13:37:37.204149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432466 ] 00:07:34.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.967 [2024-07-11 13:37:37.258144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.967 [2024-07-11 13:37:37.294717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.345 13:37:38 -- accel/accel.sh@18 -- # out=' 00:07:36.345 SPDK Configuration: 00:07:36.345 Core mask: 0x1 00:07:36.345 00:07:36.345 Accel Perf Configuration: 00:07:36.345 Workload Type: fill 00:07:36.345 Fill pattern: 0x80 00:07:36.345 Transfer size: 4096 bytes 00:07:36.345 Vector count 1 00:07:36.345 Module: software 00:07:36.345 Queue depth: 64 00:07:36.345 Allocate depth: 64 00:07:36.345 # threads/core: 1 00:07:36.345 Run time: 1 seconds 00:07:36.345 Verify: Yes 00:07:36.345 00:07:36.345 Running for 1 seconds... 00:07:36.345 00:07:36.345 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.345 ------------------------------------------------------------------------------------ 00:07:36.345 0,0 662336/s 2587 MiB/s 0 0 00:07:36.345 ==================================================================================== 00:07:36.345 Total 662336/s 2587 MiB/s 0 0' 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.345 13:37:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:36.345 13:37:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.345 13:37:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.345 13:37:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.345 13:37:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.345 13:37:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.345 13:37:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.345 13:37:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.345 13:37:38 -- accel/accel.sh@42 -- # jq -r . 00:07:36.345 [2024-07-11 13:37:38.487001] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:36.345 [2024-07-11 13:37:38.487083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432706 ] 00:07:36.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.345 [2024-07-11 13:37:38.542057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.345 [2024-07-11 13:37:38.578122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val=0x1 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val=fill 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val=0x80 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.345 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.345 13:37:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.345 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val=software 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val=64 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val=64 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- 
accel/accel.sh@21 -- # val=1 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val=Yes 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:36.346 13:37:38 -- accel/accel.sh@21 -- # val= 00:07:36.346 13:37:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # IFS=: 00:07:36.346 13:37:38 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@21 -- # val= 00:07:37.725 13:37:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # IFS=: 00:07:37.725 13:37:39 -- accel/accel.sh@20 -- # read -r var val 00:07:37.725 13:37:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.725 13:37:39 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:37.725 13:37:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.725 00:07:37.725 real 0m2.572s 00:07:37.725 user 0m2.365s 00:07:37.725 sys 0m0.215s 00:07:37.726 13:37:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.726 13:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.726 ************************************ 00:07:37.726 END TEST accel_fill 00:07:37.726 ************************************ 00:07:37.726 13:37:39 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:37.726 13:37:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:37.726 13:37:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.726 13:37:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.726 ************************************ 00:07:37.726 START TEST 
accel_copy_crc32c 00:07:37.726 ************************************ 00:07:37.726 13:37:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:37.726 13:37:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.726 13:37:39 -- accel/accel.sh@17 -- # local accel_module 00:07:37.726 13:37:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:37.726 13:37:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:37.726 13:37:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.726 13:37:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.726 13:37:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.726 13:37:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.726 13:37:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.726 13:37:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.726 13:37:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.726 13:37:39 -- accel/accel.sh@42 -- # jq -r . 00:07:37.726 [2024-07-11 13:37:39.809442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:37.726 [2024-07-11 13:37:39.809515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432954 ] 00:07:37.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.726 [2024-07-11 13:37:39.865108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.726 [2024-07-11 13:37:39.902126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.663 13:37:41 -- accel/accel.sh@18 -- # out=' 00:07:38.663 SPDK Configuration: 00:07:38.663 Core mask: 0x1 00:07:38.663 00:07:38.663 Accel Perf Configuration: 00:07:38.663 Workload Type: copy_crc32c 00:07:38.663 CRC-32C seed: 0 00:07:38.663 Vector size: 4096 bytes 00:07:38.663 Transfer size: 4096 bytes 00:07:38.663 Vector count 1 00:07:38.663 Module: software 00:07:38.663 Queue depth: 32 00:07:38.663 Allocate depth: 32 00:07:38.663 # threads/core: 1 00:07:38.663 Run time: 1 seconds 00:07:38.663 Verify: Yes 00:07:38.663 00:07:38.663 Running for 1 seconds... 00:07:38.663 00:07:38.663 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.663 ------------------------------------------------------------------------------------ 00:07:38.663 0,0 324736/s 1268 MiB/s 0 0 00:07:38.663 ==================================================================================== 00:07:38.663 Total 324736/s 1268 MiB/s 0 0' 00:07:38.663 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.663 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.663 13:37:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:38.663 13:37:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:38.663 13:37:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.663 13:37:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.663 13:37:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.663 13:37:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.663 13:37:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.663 13:37:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.663 13:37:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.663 13:37:41 -- accel/accel.sh@42 -- # jq -r . 
00:07:38.663 [2024-07-11 13:37:41.097491] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:38.663 [2024-07-11 13:37:41.097569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433188 ] 00:07:38.923 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.923 [2024-07-11 13:37:41.152581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.923 [2024-07-11 13:37:41.188810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=0x1 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=0 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=software 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=32 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 
00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=32 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=1 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val=Yes 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:38.923 13:37:41 -- accel/accel.sh@21 -- # val= 00:07:38.923 13:37:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # IFS=: 00:07:38.923 13:37:41 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@21 -- # val= 00:07:40.311 13:37:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # IFS=: 00:07:40.311 13:37:42 -- accel/accel.sh@20 -- # read -r var val 00:07:40.311 13:37:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.311 13:37:42 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:40.311 13:37:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.311 00:07:40.311 real 0m2.579s 00:07:40.311 user 0m2.360s 00:07:40.311 sys 0m0.227s 00:07:40.311 13:37:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.311 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:40.311 ************************************ 00:07:40.311 END TEST accel_copy_crc32c 00:07:40.311 ************************************ 00:07:40.311 
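[Note on reproducing these runs] Each accel test in this log is driven by accel.sh invoking the accel_perf example binary with the flags recorded in the trace; the JSON accel configuration is fed in on /dev/fd/62 and stays empty here because accel_json_cfg=() never receives any entries. A minimal sketch for rerunning the copy_crc32c case above by hand, assuming an already-built SPDK tree (the SPDK_DIR value is a placeholder, not taken from this log):

    # Rerun the 1-second software copy_crc32c workload traced above.
    # -t 1: run time in seconds, -w: workload type, -y: verify results.
    SPDK_DIR=/path/to/spdk   # assumption: point at your own checkout
    "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy_crc32c -y

The harness additionally passes -c /dev/fd/62 to supply the module config; omitting -c, as in this sketch, should fall back to the built-in software module, matching the 'Module: software' line in the output above.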
13:37:42 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:40.311 13:37:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:40.311 13:37:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.311 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:40.311 ************************************ 00:07:40.311 START TEST accel_copy_crc32c_C2 00:07:40.311 ************************************ 00:07:40.311 13:37:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:40.311 13:37:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.311 13:37:42 -- accel/accel.sh@17 -- # local accel_module 00:07:40.311 13:37:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:40.311 13:37:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:40.311 13:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.311 13:37:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.311 13:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.311 13:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.311 13:37:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.311 13:37:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.311 13:37:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.311 13:37:42 -- accel/accel.sh@42 -- # jq -r . 00:07:40.311 [2024-07-11 13:37:42.427615] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.311 [2024-07-11 13:37:42.427681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433435 ] 00:07:40.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.311 [2024-07-11 13:37:42.482089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.311 [2024-07-11 13:37:42.519004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.260 13:37:43 -- accel/accel.sh@18 -- # out=' 00:07:41.260 SPDK Configuration: 00:07:41.260 Core mask: 0x1 00:07:41.260 00:07:41.260 Accel Perf Configuration: 00:07:41.260 Workload Type: copy_crc32c 00:07:41.260 CRC-32C seed: 0 00:07:41.260 Vector size: 4096 bytes 00:07:41.260 Transfer size: 8192 bytes 00:07:41.260 Vector count 2 00:07:41.260 Module: software 00:07:41.260 Queue depth: 32 00:07:41.260 Allocate depth: 32 00:07:41.260 # threads/core: 1 00:07:41.260 Run time: 1 seconds 00:07:41.260 Verify: Yes 00:07:41.260 00:07:41.260 Running for 1 seconds... 
00:07:41.260 00:07:41.260 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.260 ------------------------------------------------------------------------------------ 00:07:41.260 0,0 235328/s 1838 MiB/s 0 0 00:07:41.260 ==================================================================================== 00:07:41.260 Total 235328/s 1838 MiB/s 0 0' 00:07:41.260 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.260 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.260 13:37:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:41.260 13:37:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:41.260 13:37:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.260 13:37:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.260 13:37:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.260 13:37:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.260 13:37:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.260 13:37:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.260 13:37:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.260 13:37:43 -- accel/accel.sh@42 -- # jq -r . 00:07:41.525 [2024-07-11 13:37:43.712572] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:41.525 [2024-07-11 13:37:43.712641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433608 ] 00:07:41.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.525 [2024-07-11 13:37:43.768808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.525 [2024-07-11 13:37:43.805031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=0x1 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=0 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=:
00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=software 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=32 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=32 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=1 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val=Yes 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:41.525 13:37:43 -- accel/accel.sh@21 -- # val= 00:07:41.525 13:37:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # IFS=: 00:07:41.525 13:37:43 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@21 -- # val= 00:07:42.904 13:37:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # IFS=: 00:07:42.904 13:37:44 -- accel/accel.sh@20 -- # read -r var val 00:07:42.904 13:37:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.904 13:37:44 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:42.904 13:37:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.904 00:07:42.904 real 0m2.577s 00:07:42.904 user 0m2.365s 00:07:42.904 sys 0m0.221s 00:07:42.904 13:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.904 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.904 ************************************ 00:07:42.904 END TEST accel_copy_crc32c_C2 00:07:42.904 ************************************ 00:07:42.904 13:37:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:42.904 13:37:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:42.904 13:37:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.904 13:37:45 -- common/autotest_common.sh@10 -- # set +x 00:07:42.904 ************************************ 00:07:42.904 START TEST accel_dualcast 00:07:42.904 ************************************ 00:07:42.904 13:37:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:42.904 13:37:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.904 13:37:45 -- accel/accel.sh@17 -- # local accel_module 00:07:42.904 13:37:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:42.904 13:37:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:42.904 13:37:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.904 13:37:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.904 13:37:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.904 13:37:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.904 13:37:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.904 13:37:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.904 13:37:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.904 13:37:45 -- accel/accel.sh@42 -- # jq -r . 00:07:42.904 [2024-07-11 13:37:45.043522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:42.904 [2024-07-11 13:37:45.043581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433830 ] 00:07:42.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.904 [2024-07-11 13:37:45.097409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.904 [2024-07-11 13:37:45.134369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.284 13:37:46 -- accel/accel.sh@18 -- # out=' 00:07:44.284 SPDK Configuration: 00:07:44.284 Core mask: 0x1 00:07:44.284 00:07:44.285 Accel Perf Configuration: 00:07:44.285 Workload Type: dualcast 00:07:44.285 Transfer size: 4096 bytes 00:07:44.285 Vector count 1 00:07:44.285 Module: software 00:07:44.285 Queue depth: 32 00:07:44.285 Allocate depth: 32 00:07:44.285 # threads/core: 1 00:07:44.285 Run time: 1 seconds 00:07:44.285 Verify: Yes 00:07:44.285 00:07:44.285 Running for 1 seconds... 00:07:44.285 00:07:44.285 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.285 ------------------------------------------------------------------------------------ 00:07:44.285 0,0 504384/s 1970 MiB/s 0 0 00:07:44.285 ==================================================================================== 00:07:44.285 Total 504384/s 1970 MiB/s 0 0' 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:44.285 13:37:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:44.285 13:37:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.285 13:37:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.285 13:37:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.285 13:37:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.285 13:37:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.285 13:37:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.285 13:37:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.285 13:37:46 -- accel/accel.sh@42 -- # jq -r . 00:07:44.285 [2024-07-11 13:37:46.327360] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:44.285 [2024-07-11 13:37:46.327421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434008 ] 00:07:44.285 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.285 [2024-07-11 13:37:46.381089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.285 [2024-07-11 13:37:46.418546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=0x1 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=dualcast 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=software 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=32 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=32 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=1 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val=Yes 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:44.285 13:37:46 -- accel/accel.sh@21 -- # val= 00:07:44.285 13:37:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # IFS=: 00:07:44.285 13:37:46 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@21 -- # val= 00:07:45.303 13:37:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # IFS=: 00:07:45.303 13:37:47 -- accel/accel.sh@20 -- # read -r var val 00:07:45.303 13:37:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.303 13:37:47 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:45.303 13:37:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.303 00:07:45.303 real 0m2.574s 00:07:45.303 user 0m2.363s 00:07:45.303 sys 0m0.218s 00:07:45.303 13:37:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.303 13:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.303 ************************************ 00:07:45.303 END TEST accel_dualcast 00:07:45.303 ************************************ 00:07:45.303 13:37:47 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:45.303 13:37:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:45.303 13:37:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.303 13:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:45.303 ************************************ 00:07:45.303 START TEST accel_compare 00:07:45.303 ************************************ 00:07:45.303 13:37:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:45.303 13:37:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.303 13:37:47 
-- accel/accel.sh@17 -- # local accel_module 00:07:45.303 13:37:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:45.303 13:37:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:45.303 13:37:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.303 13:37:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.303 13:37:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.303 13:37:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.303 13:37:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.303 13:37:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.303 13:37:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.303 13:37:47 -- accel/accel.sh@42 -- # jq -r . 00:07:45.303 [2024-07-11 13:37:47.657660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:45.303 [2024-07-11 13:37:47.657736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434241 ] 00:07:45.303 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.303 [2024-07-11 13:37:47.713418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.303 [2024-07-11 13:37:47.750382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.682 13:37:48 -- accel/accel.sh@18 -- # out=' 00:07:46.682 SPDK Configuration: 00:07:46.682 Core mask: 0x1 00:07:46.682 00:07:46.682 Accel Perf Configuration: 00:07:46.682 Workload Type: compare 00:07:46.682 Transfer size: 4096 bytes 00:07:46.682 Vector count 1 00:07:46.682 Module: software 00:07:46.682 Queue depth: 32 00:07:46.682 Allocate depth: 32 00:07:46.682 # threads/core: 1 00:07:46.682 Run time: 1 seconds 00:07:46.682 Verify: Yes 00:07:46.682 00:07:46.682 Running for 1 seconds... 00:07:46.682 00:07:46.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.682 ------------------------------------------------------------------------------------ 00:07:46.682 0,0 610784/s 2385 MiB/s 0 0 00:07:46.682 ==================================================================================== 00:07:46.682 Total 610784/s 2385 MiB/s 0 0' 00:07:46.682 13:37:48 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:48 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:46.682 13:37:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:46.682 13:37:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.682 13:37:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.682 13:37:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.682 13:37:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.682 13:37:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.682 13:37:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.682 13:37:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.682 13:37:48 -- accel/accel.sh@42 -- # jq -r . 00:07:46.682 [2024-07-11 13:37:48.944610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:46.682 [2024-07-11 13:37:48.944688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434438 ] 00:07:46.682 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.682 [2024-07-11 13:37:49.000382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.682 [2024-07-11 13:37:49.036020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=0x1 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=compare 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=software 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=32 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=32 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=1 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val=Yes 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:46.682 13:37:49 -- accel/accel.sh@21 -- # val= 00:07:46.682 13:37:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # IFS=: 00:07:46.682 13:37:49 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@21 -- # val= 00:07:48.063 13:37:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # IFS=: 00:07:48.063 13:37:50 -- accel/accel.sh@20 -- # read -r var val 00:07:48.063 13:37:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.063 13:37:50 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:48.063 13:37:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.063 00:07:48.063 real 0m2.577s 00:07:48.063 user 0m2.352s 00:07:48.063 sys 0m0.231s 00:07:48.063 13:37:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.063 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.063 ************************************ 00:07:48.063 END TEST accel_compare 00:07:48.063 ************************************ 00:07:48.063 13:37:50 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:48.063 13:37:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:48.063 13:37:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.063 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:48.063 ************************************ 00:07:48.063 START TEST accel_xor 00:07:48.063 ************************************ 00:07:48.063 13:37:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:48.063 13:37:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.063 13:37:50 -- accel/accel.sh@17 
-- # local accel_module 00:07:48.063 13:37:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:48.063 13:37:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:48.063 13:37:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.063 13:37:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.063 13:37:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.063 13:37:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.063 13:37:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.063 13:37:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.063 13:37:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.063 13:37:50 -- accel/accel.sh@42 -- # jq -r . 00:07:48.063 [2024-07-11 13:37:50.269923] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:48.063 [2024-07-11 13:37:50.269999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434681 ] 00:07:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.063 [2024-07-11 13:37:50.324484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.063 [2024-07-11 13:37:50.361468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.441 13:37:51 -- accel/accel.sh@18 -- # out=' 00:07:49.441 SPDK Configuration: 00:07:49.441 Core mask: 0x1 00:07:49.441 00:07:49.441 Accel Perf Configuration: 00:07:49.441 Workload Type: xor 00:07:49.441 Source buffers: 2 00:07:49.441 Transfer size: 4096 bytes 00:07:49.441 Vector count 1 00:07:49.441 Module: software 00:07:49.441 Queue depth: 32 00:07:49.441 Allocate depth: 32 00:07:49.441 # threads/core: 1 00:07:49.441 Run time: 1 seconds 00:07:49.441 Verify: Yes 00:07:49.441 00:07:49.441 Running for 1 seconds... 00:07:49.441 00:07:49.441 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.441 ------------------------------------------------------------------------------------ 00:07:49.441 0,0 490688/s 1916 MiB/s 0 0 00:07:49.441 ==================================================================================== 00:07:49.441 Total 490688/s 1916 MiB/s 0 0' 00:07:49.441 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.441 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.441 13:37:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:49.441 13:37:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:49.441 13:37:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.441 13:37:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.441 13:37:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.441 13:37:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.441 13:37:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.441 13:37:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.441 13:37:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.441 13:37:51 -- accel/accel.sh@42 -- # jq -r . 00:07:49.442 [2024-07-11 13:37:51.553448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:49.442 [2024-07-11 13:37:51.553508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434921 ] 00:07:49.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.442 [2024-07-11 13:37:51.606658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.442 [2024-07-11 13:37:51.642316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=0x1 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=xor 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=2 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=software 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=32 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=32 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- 
accel/accel.sh@21 -- # val=1 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val=Yes 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:49.442 13:37:51 -- accel/accel.sh@21 -- # val= 00:07:49.442 13:37:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # IFS=: 00:07:49.442 13:37:51 -- accel/accel.sh@20 -- # read -r var val 00:07:50.379 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.379 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.379 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.380 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.380 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.380 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.380 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@21 -- # val= 00:07:50.380 13:37:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # IFS=: 00:07:50.380 13:37:52 -- accel/accel.sh@20 -- # read -r var val 00:07:50.380 13:37:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.380 13:37:52 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:50.380 13:37:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.380 00:07:50.380 real 0m2.571s 00:07:50.380 user 0m2.358s 00:07:50.380 sys 0m0.220s 00:07:50.380 13:37:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.380 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 ************************************ 00:07:50.380 END TEST accel_xor 00:07:50.380 ************************************ 00:07:50.638 13:37:52 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:50.638 13:37:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:50.638 13:37:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.638 13:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:50.638 ************************************ 00:07:50.638 START TEST accel_xor 
00:07:50.638 ************************************ 00:07:50.638 13:37:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:50.638 13:37:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.638 13:37:52 -- accel/accel.sh@17 -- # local accel_module 00:07:50.638 13:37:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:50.638 13:37:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:50.638 13:37:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.638 13:37:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.638 13:37:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.638 13:37:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.638 13:37:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.638 13:37:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.638 13:37:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.638 13:37:52 -- accel/accel.sh@42 -- # jq -r . 00:07:50.638 [2024-07-11 13:37:52.878840] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:50.638 [2024-07-11 13:37:52.878905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435168 ] 00:07:50.638 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.638 [2024-07-11 13:37:52.932433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.638 [2024-07-11 13:37:52.968861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.023 13:37:54 -- accel/accel.sh@18 -- # out=' 00:07:52.023 SPDK Configuration: 00:07:52.023 Core mask: 0x1 00:07:52.023 00:07:52.023 Accel Perf Configuration: 00:07:52.023 Workload Type: xor 00:07:52.023 Source buffers: 3 00:07:52.023 Transfer size: 4096 bytes 00:07:52.023 Vector count 1 00:07:52.023 Module: software 00:07:52.023 Queue depth: 32 00:07:52.023 Allocate depth: 32 00:07:52.023 # threads/core: 1 00:07:52.023 Run time: 1 seconds 00:07:52.023 Verify: Yes 00:07:52.023 00:07:52.023 Running for 1 seconds... 00:07:52.023 00:07:52.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.023 ------------------------------------------------------------------------------------ 00:07:52.023 0,0 445728/s 1741 MiB/s 0 0 00:07:52.023 ==================================================================================== 00:07:52.023 Total 445728/s 1741 MiB/s 0 0' 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:52.023 13:37:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:52.023 13:37:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.023 13:37:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.023 13:37:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.023 13:37:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.023 13:37:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.023 13:37:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.023 13:37:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.023 13:37:54 -- accel/accel.sh@42 -- # jq -r . 00:07:52.023 [2024-07-11 13:37:54.161587] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:52.023 [2024-07-11 13:37:54.161676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435405 ] 00:07:52.023 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.023 [2024-07-11 13:37:54.214487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.023 [2024-07-11 13:37:54.250255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=0x1 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=xor 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=3 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=software 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=32 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val=32 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- 
accel/accel.sh@21 -- # val=1 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.023 13:37:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.023 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.023 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.024 13:37:54 -- accel/accel.sh@21 -- # val=Yes 00:07:52.024 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.024 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.024 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:52.024 13:37:54 -- accel/accel.sh@21 -- # val= 00:07:52.024 13:37:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # IFS=: 00:07:52.024 13:37:54 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@21 -- # val= 00:07:53.399 13:37:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # IFS=: 00:07:53.399 13:37:55 -- accel/accel.sh@20 -- # read -r var val 00:07:53.399 13:37:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.400 13:37:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:53.400 13:37:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.400 00:07:53.400 real 0m2.569s 00:07:53.400 user 0m2.356s 00:07:53.400 sys 0m0.220s 00:07:53.400 13:37:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.400 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:53.400 ************************************ 00:07:53.400 END TEST accel_xor 00:07:53.400 ************************************ 00:07:53.400 13:37:55 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:53.400 13:37:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:53.400 13:37:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.400 13:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:53.400 ************************************ 00:07:53.400 START TEST 
accel_dif_verify 00:07:53.400 ************************************ 00:07:53.400 13:37:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:53.400 13:37:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.400 13:37:55 -- accel/accel.sh@17 -- # local accel_module 00:07:53.400 13:37:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:53.400 13:37:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:53.400 13:37:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.400 13:37:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.400 13:37:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.400 13:37:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.400 13:37:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.400 13:37:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.400 13:37:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.400 13:37:55 -- accel/accel.sh@42 -- # jq -r . 00:07:53.400 [2024-07-11 13:37:55.486102] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:53.400 [2024-07-11 13:37:55.486169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435652 ] 00:07:53.400 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.400 [2024-07-11 13:37:55.538799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.400 [2024-07-11 13:37:55.575286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.338 13:37:56 -- accel/accel.sh@18 -- # out=' 00:07:54.338 SPDK Configuration: 00:07:54.338 Core mask: 0x1 00:07:54.338 00:07:54.338 Accel Perf Configuration: 00:07:54.338 Workload Type: dif_verify 00:07:54.338 Vector size: 4096 bytes 00:07:54.338 Transfer size: 4096 bytes 00:07:54.338 Block size: 512 bytes 00:07:54.338 Metadata size: 8 bytes 00:07:54.338 Vector count 1 00:07:54.338 Module: software 00:07:54.338 Queue depth: 32 00:07:54.338 Allocate depth: 32 00:07:54.338 # threads/core: 1 00:07:54.338 Run time: 1 seconds 00:07:54.338 Verify: No 00:07:54.338 00:07:54.338 Running for 1 seconds... 00:07:54.338 00:07:54.338 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.338 ------------------------------------------------------------------------------------ 00:07:54.338 0,0 131936/s 515 MiB/s 0 0 00:07:54.338 ==================================================================================== 00:07:54.338 Total 131936/s 515 MiB/s 0 0' 00:07:54.338 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.338 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.338 13:37:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:54.338 13:37:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:54.338 13:37:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.338 13:37:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.338 13:37:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.338 13:37:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.338 13:37:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.338 13:37:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.338 13:37:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.338 13:37:56 -- accel/accel.sh@42 -- # jq -r . 
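In the dif_verify configuration above, the 512-byte block size with 8 bytes of metadata means each 4096-byte transfer covers 4096/512 = 8 DIF-protected blocks. The same bandwidth arithmetic applies (bc assumed available):

  # 131936 transfers/s * 4096 B = ~515.4 MiB/s, matching the Total row
  echo "scale=1; 131936 * 4096 / 1048576" | bc
  # 8 DIF tuples per transfer -> roughly 1.06M tuples verified per second
  echo "131936 * 8" | bc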
00:07:54.338 [2024-07-11 13:37:56.767628] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:54.338 [2024-07-11 13:37:56.767691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435892 ] 00:07:54.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.598 [2024-07-11 13:37:56.821127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.598 [2024-07-11 13:37:56.856534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=0x1 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=dif_verify 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=software 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=32 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=32 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=1 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val=No 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:54.598 13:37:56 -- accel/accel.sh@21 -- # val= 00:07:54.598 13:37:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # IFS=: 00:07:54.598 13:37:56 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@21 -- # val= 00:07:55.977 13:37:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # IFS=: 00:07:55.977 13:37:58 -- accel/accel.sh@20 -- # read -r var val 00:07:55.977 13:37:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.977 13:37:58 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:55.977 13:37:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.977 00:07:55.977 real 0m2.567s 00:07:55.977 user 0m2.363s 00:07:55.977 sys 0m0.212s 00:07:55.977 13:37:58 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.977 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.977 ************************************ 00:07:55.977 END TEST accel_dif_verify 00:07:55.977 ************************************ 00:07:55.977 13:37:58 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:55.977 13:37:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:55.977 13:37:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.977 13:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:55.977 ************************************ 00:07:55.977 START TEST accel_dif_generate 00:07:55.977 ************************************ 00:07:55.977 13:37:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:55.977 13:37:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:55.977 13:37:58 -- accel/accel.sh@17 -- # local accel_module 00:07:55.977 13:37:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:55.977 13:37:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:55.977 13:37:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.977 13:37:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.977 13:37:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.977 13:37:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.977 13:37:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.978 13:37:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.978 13:37:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.978 13:37:58 -- accel/accel.sh@42 -- # jq -r . 00:07:55.978 [2024-07-11 13:37:58.091527] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:55.978 [2024-07-11 13:37:58.091585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436143 ] 00:07:55.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.978 [2024-07-11 13:37:58.144449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.978 [2024-07-11 13:37:58.180375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.914 13:37:59 -- accel/accel.sh@18 -- # out=' 00:07:56.914 SPDK Configuration: 00:07:56.914 Core mask: 0x1 00:07:56.914 00:07:56.914 Accel Perf Configuration: 00:07:56.914 Workload Type: dif_generate 00:07:56.914 Vector size: 4096 bytes 00:07:56.914 Transfer size: 4096 bytes 00:07:56.914 Block size: 512 bytes 00:07:56.914 Metadata size: 8 bytes 00:07:56.914 Vector count 1 00:07:56.914 Module: software 00:07:56.914 Queue depth: 32 00:07:56.914 Allocate depth: 32 00:07:56.914 # threads/core: 1 00:07:56.914 Run time: 1 seconds 00:07:56.914 Verify: No 00:07:56.914 00:07:56.914 Running for 1 seconds... 
00:07:56.914 00:07:56.914 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.914 ------------------------------------------------------------------------------------ 00:07:56.914 0,0 160480/s 626 MiB/s 0 0 00:07:56.914 ==================================================================================== 00:07:56.914 Total 160480/s 626 MiB/s 0 0' 00:07:56.914 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:56.914 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:56.914 13:37:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:56.914 13:37:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:56.914 13:37:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.914 13:37:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.914 13:37:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.914 13:37:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.914 13:37:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.914 13:37:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.914 13:37:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.914 13:37:59 -- accel/accel.sh@42 -- # jq -r . 00:07:57.184 [2024-07-11 13:37:59.371597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:57.184 [2024-07-11 13:37:59.371657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436377 ] 00:07:57.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.184 [2024-07-11 13:37:59.424749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.184 [2024-07-11 13:37:59.459938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.184 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.184 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.184 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.184 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.184 13:37:59 -- accel/accel.sh@21 -- # val=0x1 00:07:57.184 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.184 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.184 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.184 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.184 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=dif_generate 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 
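The dif_generate run above passes the same check; generate comes out faster than verify here (626 vs 515 MiB/s), plausibly because it only computes and stores the 8-byte tuples rather than recomputing and comparing them against existing metadata. Sketch (bc assumed):

  # 160480 transfers/s * 4096 B = ~626.9 MiB/s
  echo "scale=1; 160480 * 4096 / 1048576" | bc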
00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=software 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=32 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=32 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=1 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val=No 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:57.185 13:37:59 -- accel/accel.sh@21 -- # val= 00:07:57.185 13:37:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # IFS=: 00:07:57.185 13:37:59 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@21 -- # val= 00:07:58.572 13:38:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # IFS=: 00:07:58.572 13:38:00 -- accel/accel.sh@20 -- # read -r var val 00:07:58.572 13:38:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:58.572 13:38:00 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:58.572 13:38:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.572 00:07:58.572 real 0m2.565s 00:07:58.572 user 0m2.352s 00:07:58.572 sys 0m0.222s 00:07:58.572 13:38:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.572 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:58.572 ************************************ 00:07:58.572 END TEST accel_dif_generate 00:07:58.572 ************************************ 00:07:58.572 13:38:00 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:58.572 13:38:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:58.572 13:38:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.572 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:58.572 ************************************ 00:07:58.572 START TEST accel_dif_generate_copy 00:07:58.572 ************************************ 00:07:58.572 13:38:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:58.572 13:38:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.572 13:38:00 -- accel/accel.sh@17 -- # local accel_module 00:07:58.572 13:38:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:58.572 13:38:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:58.572 13:38:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.572 13:38:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.572 13:38:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.572 13:38:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.572 13:38:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.572 13:38:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.572 13:38:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.572 13:38:00 -- accel/accel.sh@42 -- # jq -r . 00:07:58.572 [2024-07-11 13:38:00.695816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
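The dif_generate_copy workload starting here can be reproduced from an SPDK build tree with the flags visible in the echoed command line. A minimal sketch (the harness additionally passes -c /dev/fd/62 with a generated JSON accel config; dropping it is assumed to leave the default software module in effect, which is the module these runs report anyway):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate_copy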
00:07:58.572 [2024-07-11 13:38:00.695874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436633 ] 00:07:58.572 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.572 [2024-07-11 13:38:00.748855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.572 [2024-07-11 13:38:00.785237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.510 13:38:01 -- accel/accel.sh@18 -- # out=' 00:07:59.510 SPDK Configuration: 00:07:59.510 Core mask: 0x1 00:07:59.510 00:07:59.510 Accel Perf Configuration: 00:07:59.510 Workload Type: dif_generate_copy 00:07:59.510 Vector size: 4096 bytes 00:07:59.510 Transfer size: 4096 bytes 00:07:59.510 Vector count 1 00:07:59.510 Module: software 00:07:59.510 Queue depth: 32 00:07:59.510 Allocate depth: 32 00:07:59.510 # threads/core: 1 00:07:59.510 Run time: 1 seconds 00:07:59.510 Verify: No 00:07:59.510 00:07:59.510 Running for 1 seconds... 00:07:59.510 00:07:59.510 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:59.510 ------------------------------------------------------------------------------------ 00:07:59.510 0,0 122240/s 477 MiB/s 0 0 00:07:59.510 ==================================================================================== 00:07:59.510 Total 122240/s 477 MiB/s 0 0' 00:07:59.510 13:38:01 -- accel/accel.sh@20 -- # IFS=: 00:07:59.510 13:38:01 -- accel/accel.sh@20 -- # read -r var val 00:07:59.510 13:38:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:59.510 13:38:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:59.510 13:38:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.510 13:38:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.510 13:38:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.510 13:38:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.510 13:38:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.510 13:38:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.510 13:38:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.510 13:38:01 -- accel/accel.sh@42 -- # jq -r . 00:07:59.769 [2024-07-11 13:38:01.976449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
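Comparing the first-run numbers above: dif_generate_copy lands at 477 MiB/s against 626 MiB/s for plain dif_generate, the gap presumably being the extra payload copy bundled into the operation. The usual check (bc assumed):

  # 122240 transfers/s * 4096 B = ~477.5 MiB/s
  echo "scale=1; 122240 * 4096 / 1048576" | bc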
00:07:59.769 [2024-07-11 13:38:01.976533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436865 ] 00:07:59.769 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.769 [2024-07-11 13:38:02.030778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.769 [2024-07-11 13:38:02.066670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val=0x1 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val=software 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.769 13:38:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.769 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.769 13:38:02 -- accel/accel.sh@21 -- # val=32 00:07:59.769 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val=32 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r 
var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val=1 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val=No 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:07:59.770 13:38:02 -- accel/accel.sh@21 -- # val= 00:07:59.770 13:38:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # IFS=: 00:07:59.770 13:38:02 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@21 -- # val= 00:08:01.148 13:38:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # IFS=: 00:08:01.148 13:38:03 -- accel/accel.sh@20 -- # read -r var val 00:08:01.148 13:38:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.148 13:38:03 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:01.148 13:38:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.148 00:08:01.148 real 0m2.566s 00:08:01.148 user 0m2.367s 00:08:01.148 sys 0m0.207s 00:08:01.148 13:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.148 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:08:01.148 ************************************ 00:08:01.148 END TEST accel_dif_generate_copy 00:08:01.148 ************************************ 00:08:01.148 13:38:03 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:01.148 13:38:03 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.148 13:38:03 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:01.148 13:38:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.148 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:08:01.148 ************************************ 00:08:01.148 START TEST accel_comp 00:08:01.148 ************************************ 00:08:01.148 13:38:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.148 13:38:03 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.148 13:38:03 -- accel/accel.sh@17 -- # local accel_module 00:08:01.148 13:38:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.148 13:38:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:01.148 13:38:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.148 13:38:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.148 13:38:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.148 13:38:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.148 13:38:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.148 13:38:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.148 13:38:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.148 13:38:03 -- accel/accel.sh@42 -- # jq -r . 00:08:01.148 [2024-07-11 13:38:03.302178] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:01.148 [2024-07-11 13:38:03.302256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437115 ] 00:08:01.148 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.148 [2024-07-11 13:38:03.356877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.148 [2024-07-11 13:38:03.393630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.526 13:38:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:02.526 00:08:02.526 SPDK Configuration: 00:08:02.526 Core mask: 0x1 00:08:02.526 00:08:02.526 Accel Perf Configuration: 00:08:02.526 Workload Type: compress 00:08:02.526 Transfer size: 4096 bytes 00:08:02.526 Vector count 1 00:08:02.526 Module: software 00:08:02.526 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:02.526 Queue depth: 32 00:08:02.526 Allocate depth: 32 00:08:02.526 # threads/core: 1 00:08:02.526 Run time: 1 seconds 00:08:02.526 Verify: No 00:08:02.526 00:08:02.526 Running for 1 seconds... 
00:08:02.526 00:08:02.526 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.526 ------------------------------------------------------------------------------------ 00:08:02.526 0,0 61472/s 240 MiB/s 0 0 00:08:02.526 ==================================================================================== 00:08:02.526 Total 61472/s 240 MiB/s 0 0' 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:02.526 13:38:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:02.526 13:38:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.526 13:38:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.526 13:38:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.526 13:38:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.526 13:38:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.526 13:38:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.526 13:38:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.526 13:38:04 -- accel/accel.sh@42 -- # jq -r . 00:08:02.526 [2024-07-11 13:38:04.588470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:02.526 [2024-07-11 13:38:04.588530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437347 ] 00:08:02.526 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.526 [2024-07-11 13:38:04.641932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.526 [2024-07-11 13:38:04.678794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val=0x1 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val=compress 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 
13:38:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.526 13:38:04 -- accel/accel.sh@21 -- # val=software 00:08:02.526 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.526 13:38:04 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.526 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val=32 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val=32 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val=1 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val=No 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:02.527 13:38:04 -- accel/accel.sh@21 -- # val= 00:08:02.527 13:38:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # IFS=: 00:08:02.527 13:38:04 -- accel/accel.sh@20 -- # read -r var val 00:08:03.549 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # 
IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@21 -- # val= 00:08:03.550 13:38:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # IFS=: 00:08:03.550 13:38:05 -- accel/accel.sh@20 -- # read -r var val 00:08:03.550 13:38:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:03.550 13:38:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:03.550 13:38:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.550 00:08:03.550 real 0m2.579s 00:08:03.550 user 0m2.367s 00:08:03.550 sys 0m0.221s 00:08:03.550 13:38:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.550 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:08:03.550 ************************************ 00:08:03.550 END TEST accel_comp 00:08:03.550 ************************************ 00:08:03.550 13:38:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.550 13:38:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:03.550 13:38:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.550 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:08:03.550 ************************************ 00:08:03.550 START TEST accel_decomp 00:08:03.550 ************************************ 00:08:03.550 13:38:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.550 13:38:05 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.550 13:38:05 -- accel/accel.sh@17 -- # local accel_module 00:08:03.550 13:38:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.550 13:38:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:03.550 13:38:05 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.550 13:38:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.550 13:38:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.550 13:38:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.550 13:38:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.550 13:38:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.550 13:38:05 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.550 13:38:05 -- accel/accel.sh@42 -- # jq -r . 00:08:03.550 [2024-07-11 13:38:05.920170] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
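Unlike the fixed-pattern workloads above, the compress/decompress cases take their input corpus from the file passed with -l (reported as "File Name" in the configuration block). A minimal reproduction sketch from an SPDK checkout, using only flags that appear in the echoed command line:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y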
00:08:03.550 [2024-07-11 13:38:05.920248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437607 ] 00:08:03.550 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.550 [2024-07-11 13:38:05.974407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.808 [2024-07-11 13:38:06.011387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.742 13:38:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:04.742 00:08:04.742 SPDK Configuration: 00:08:04.742 Core mask: 0x1 00:08:04.742 00:08:04.742 Accel Perf Configuration: 00:08:04.742 Workload Type: decompress 00:08:04.742 Transfer size: 4096 bytes 00:08:04.742 Vector count 1 00:08:04.742 Module: software 00:08:04.742 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:04.742 Queue depth: 32 00:08:04.742 Allocate depth: 32 00:08:04.742 # threads/core: 1 00:08:04.742 Run time: 1 seconds 00:08:04.742 Verify: Yes 00:08:04.742 00:08:04.742 Running for 1 seconds... 00:08:04.742 00:08:04.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:04.742 ------------------------------------------------------------------------------------ 00:08:04.742 0,0 74112/s 289 MiB/s 0 0 00:08:04.742 ==================================================================================== 00:08:04.742 Total 74112/s 289 MiB/s 0 0' 00:08:04.742 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:04.742 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:04.742 13:38:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.742 13:38:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:04.742 13:38:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.742 13:38:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.742 13:38:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.742 13:38:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.742 13:38:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.742 13:38:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.742 13:38:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.742 13:38:07 -- accel/accel.sh@42 -- # jq -r . 00:08:05.002 [2024-07-11 13:38:07.204944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
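The decompress runs execute with Verify: Yes (-y), so each output buffer is checked rather than merely produced; the bandwidth column still follows from the 4096-byte transfer size (bc assumed):

  # 74112 transfers/s * 4096 B = ~289.5 MiB/s, matching the Total row
  echo "scale=1; 74112 * 4096 / 1048576" | bc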
00:08:05.002 [2024-07-11 13:38:07.205003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1437839 ] 00:08:05.002 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.002 [2024-07-11 13:38:07.257972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.002 [2024-07-11 13:38:07.294056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=0x1 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=decompress 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=software 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@23 -- # accel_module=software 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=32 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 
-- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=32 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=1 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val=Yes 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:05.002 13:38:07 -- accel/accel.sh@21 -- # val= 00:08:05.002 13:38:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # IFS=: 00:08:05.002 13:38:07 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@21 -- # val= 00:08:06.380 13:38:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # IFS=: 00:08:06.380 13:38:08 -- accel/accel.sh@20 -- # read -r var val 00:08:06.380 13:38:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:06.380 13:38:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:06.380 13:38:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.380 00:08:06.380 real 0m2.576s 00:08:06.380 user 0m2.369s 00:08:06.380 sys 0m0.217s 00:08:06.380 13:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.380 13:38:08 -- common/autotest_common.sh@10 -- # set +x 00:08:06.380 ************************************ 00:08:06.380 END TEST accel_decomp 00:08:06.380 ************************************ 00:08:06.380 13:38:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.380 13:38:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:06.380 13:38:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.380 13:38:08 -- common/autotest_common.sh@10 -- # set +x 00:08:06.380 ************************************ 00:08:06.380 START TEST accel_decmop_full 00:08:06.380 ************************************ 00:08:06.380 13:38:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.380 13:38:08 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.380 13:38:08 -- accel/accel.sh@17 -- # local accel_module 00:08:06.380 13:38:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.380 13:38:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:06.380 13:38:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.380 13:38:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.380 13:38:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.380 13:38:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.380 13:38:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.380 13:38:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.380 13:38:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.380 13:38:08 -- accel/accel.sh@42 -- # jq -r . 00:08:06.380 [2024-07-11 13:38:08.535649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:06.381 [2024-07-11 13:38:08.535726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438088 ] 00:08:06.381 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.381 [2024-07-11 13:38:08.590761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.381 [2024-07-11 13:38:08.627429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.765 13:38:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:07.765 00:08:07.765 SPDK Configuration: 00:08:07.765 Core mask: 0x1 00:08:07.765 00:08:07.765 Accel Perf Configuration: 00:08:07.765 Workload Type: decompress 00:08:07.765 Transfer size: 111250 bytes 00:08:07.765 Vector count 1 00:08:07.765 Module: software 00:08:07.765 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:07.765 Queue depth: 32 00:08:07.765 Allocate depth: 32 00:08:07.765 # threads/core: 1 00:08:07.765 Run time: 1 seconds 00:08:07.765 Verify: Yes 00:08:07.765 00:08:07.765 Running for 1 seconds... 
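The invocation traced above can be reproduced standalone. A minimal sketch, assuming the workspace layout shown in this log; the empty JSON config stands in for the one accel.sh assembles at runtime, and reading -o 0 as "use the block size stored in the input file" is inferred from the 111250-byte transfer size reported below:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -t 1: run for 1 second      -w decompress: workload type
# -l: compressed input file   -y: verify the decompressed output
# -o 0: io size 0, i.e. use the block size stored in the input file
"$SPDK/build/examples/accel_perf" -c <(echo '{}') \
    -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0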
00:08:07.765 00:08:07.765 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.765 ------------------------------------------------------------------------------------ 00:08:07.765 0,0 4928/s 203 MiB/s 0 0 00:08:07.765 ==================================================================================== 00:08:07.765 Total 4928/s 522 MiB/s 0 0' 00:08:07.765 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.765 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.765 13:38:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.765 13:38:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:07.765 13:38:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.765 13:38:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.765 13:38:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.765 13:38:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.766 13:38:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.766 13:38:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.766 13:38:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.766 13:38:09 -- accel/accel.sh@42 -- # jq -r . 00:08:07.766 [2024-07-11 13:38:09.831748] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:07.766 [2024-07-11 13:38:09.831826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438320 ] 00:08:07.766 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.766 [2024-07-11 13:38:09.885919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.766 [2024-07-11 13:38:09.921568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=0x1 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=decompress 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:07.766 13:38:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=software 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=32 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=32 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=1 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val=Yes 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:07.766 13:38:09 -- accel/accel.sh@21 -- # val= 00:08:07.766 13:38:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # IFS=: 00:08:07.766 13:38:09 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- 
accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@21 -- # val= 00:08:08.703 13:38:11 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # IFS=: 00:08:08.703 13:38:11 -- accel/accel.sh@20 -- # read -r var val 00:08:08.703 13:38:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:08.703 13:38:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:08.703 13:38:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.703 00:08:08.703 real 0m2.596s 00:08:08.703 user 0m2.386s 00:08:08.703 sys 0m0.216s 00:08:08.703 13:38:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.703 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:08:08.703 ************************************ 00:08:08.703 END TEST accel_decmop_full 00:08:08.703 ************************************ 00:08:08.703 13:38:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:08.703 13:38:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:08.703 13:38:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.703 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:08:08.703 ************************************ 00:08:08.703 START TEST accel_decomp_mcore 00:08:08.703 ************************************ 00:08:08.703 13:38:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:08.703 13:38:11 -- accel/accel.sh@16 -- # local accel_opc 00:08:08.703 13:38:11 -- accel/accel.sh@17 -- # local accel_module 00:08:08.703 13:38:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:08.703 13:38:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:08.703 13:38:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.703 13:38:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.703 13:38:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.703 13:38:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.703 13:38:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.703 13:38:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.703 13:38:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.703 13:38:11 -- accel/accel.sh@42 -- # jq -r . 00:08:08.962 [2024-07-11 13:38:11.167413] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
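The mcore variant passes -m 0xf, a hexadecimal bitmask of reactor cores; the four "Reactor started on core N" notices in the run that follows correspond to its four set bits. A minimal sketch of the mapping, for illustration only:

mask=0xf
for core in {0..31}; do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# prints cores 0 through 3 for mask 0xf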
00:08:08.962 [2024-07-11 13:38:11.167473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438573 ] 00:08:08.962 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.962 [2024-07-11 13:38:11.222190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.962 [2024-07-11 13:38:11.261400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.962 [2024-07-11 13:38:11.261499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.962 [2024-07-11 13:38:11.261560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.962 [2024-07-11 13:38:11.261562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.341 13:38:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:10.341 00:08:10.341 SPDK Configuration: 00:08:10.341 Core mask: 0xf 00:08:10.341 00:08:10.341 Accel Perf Configuration: 00:08:10.341 Workload Type: decompress 00:08:10.341 Transfer size: 4096 bytes 00:08:10.341 Vector count 1 00:08:10.341 Module: software 00:08:10.341 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:10.341 Queue depth: 32 00:08:10.341 Allocate depth: 32 00:08:10.341 # threads/core: 1 00:08:10.341 Run time: 1 seconds 00:08:10.341 Verify: Yes 00:08:10.341 00:08:10.341 Running for 1 seconds... 00:08:10.341 00:08:10.341 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:10.341 ------------------------------------------------------------------------------------ 00:08:10.341 0,0 61472/s 113 MiB/s 0 0 00:08:10.341 3,0 61728/s 113 MiB/s 0 0 00:08:10.341 2,0 61792/s 113 MiB/s 0 0 00:08:10.341 1,0 61760/s 113 MiB/s 0 0 00:08:10.341 ==================================================================================== 00:08:10.341 Total 246752/s 963 MiB/s 0 0' 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.341 13:38:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:10.341 13:38:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.341 13:38:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.341 13:38:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.341 13:38:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.341 13:38:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.341 13:38:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.341 13:38:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.341 13:38:12 -- accel/accel.sh@42 -- # jq -r . 00:08:10.341 [2024-07-11 13:38:12.464442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
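The Total row of the table above is consistent with transfers/s multiplied by the 4096-byte transfer size; a quick check (the reported 963 suggests the tool truncates rather than rounds):

awk 'BEGIN { printf "%.2f MiB/s\n", 246752 * 4096 / (1024 * 1024) }'
# 963.88 MiB/s -- matching "Total 246752/s 963 MiB/s" above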
00:08:10.341 [2024-07-11 13:38:12.464508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438813 ] 00:08:10.341 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.341 [2024-07-11 13:38:12.519470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.341 [2024-07-11 13:38:12.557615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.341 [2024-07-11 13:38:12.557711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.341 [2024-07-11 13:38:12.557777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.341 [2024-07-11 13:38:12.557778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=0xf 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=decompress 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=software 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@23 -- # accel_module=software 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case 
"$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=32 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=32 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=1 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val=Yes 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:10.341 13:38:12 -- accel/accel.sh@21 -- # val= 00:08:10.341 13:38:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # IFS=: 00:08:10.341 13:38:12 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.717 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.717 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.717 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.717 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.717 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.717 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.717 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.718 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.718 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.718 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.718 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.718 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.718 
13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.718 13:38:13 -- accel/accel.sh@21 -- # val= 00:08:11.718 13:38:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # IFS=: 00:08:11.718 13:38:13 -- accel/accel.sh@20 -- # read -r var val 00:08:11.718 13:38:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:11.718 13:38:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:11.718 13:38:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.718 00:08:11.718 real 0m2.601s 00:08:11.718 user 0m9.041s 00:08:11.718 sys 0m0.229s 00:08:11.718 13:38:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.718 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:08:11.718 ************************************ 00:08:11.718 END TEST accel_decomp_mcore 00:08:11.718 ************************************ 00:08:11.718 13:38:13 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:11.718 13:38:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:11.718 13:38:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.718 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:08:11.718 ************************************ 00:08:11.718 START TEST accel_decomp_full_mcore 00:08:11.718 ************************************ 00:08:11.718 13:38:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:11.718 13:38:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.718 13:38:13 -- accel/accel.sh@17 -- # local accel_module 00:08:11.718 13:38:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:11.718 13:38:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:11.718 13:38:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.718 13:38:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.718 13:38:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.718 13:38:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.718 13:38:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.718 13:38:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.718 13:38:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.718 13:38:13 -- accel/accel.sh@42 -- # jq -r . 00:08:11.718 [2024-07-11 13:38:13.810154] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
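The START/END TEST banners and the real/user/sys triplets throughout this log come from the run_test wrapper in common/autotest_common.sh, which also checks its argument count (the '[' 13 -le 1 ']' trace above). A simplified sketch of the wrapper's shape, not the actual implementation:

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen after each test
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}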
00:08:11.718 [2024-07-11 13:38:13.810238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439056 ] 00:08:11.718 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.718 [2024-07-11 13:38:13.867437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.718 [2024-07-11 13:38:13.906510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.718 [2024-07-11 13:38:13.906609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.718 [2024-07-11 13:38:13.906683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.718 [2024-07-11 13:38:13.906684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.654 13:38:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:12.654 00:08:12.654 SPDK Configuration: 00:08:12.654 Core mask: 0xf 00:08:12.654 00:08:12.654 Accel Perf Configuration: 00:08:12.654 Workload Type: decompress 00:08:12.654 Transfer size: 111250 bytes 00:08:12.654 Vector count 1 00:08:12.654 Module: software 00:08:12.654 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.654 Queue depth: 32 00:08:12.654 Allocate depth: 32 00:08:12.654 # threads/core: 1 00:08:12.654 Run time: 1 seconds 00:08:12.654 Verify: Yes 00:08:12.654 00:08:12.654 Running for 1 seconds... 00:08:12.654 00:08:12.654 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.654 ------------------------------------------------------------------------------------ 00:08:12.654 0,0 4640/s 191 MiB/s 0 0 00:08:12.654 3,0 4672/s 192 MiB/s 0 0 00:08:12.654 2,0 4672/s 192 MiB/s 0 0 00:08:12.654 1,0 4672/s 192 MiB/s 0 0 00:08:12.654 ==================================================================================== 00:08:12.654 Total 18656/s 1979 MiB/s 0 0' 00:08:12.654 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.654 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.654 13:38:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:12.654 13:38:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:12.654 13:38:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.654 13:38:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.654 13:38:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.654 13:38:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.654 13:38:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.654 13:38:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.654 13:38:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.654 13:38:15 -- accel/accel.sh@42 -- # jq -r . 00:08:12.914 [2024-07-11 13:38:15.119763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
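Comparing the Total above (18656/s across four cores, 111250-byte transfers) with the single-core accel_decmop_full run earlier (4928/s, same transfer size) shows near-linear scaling for the software decompress path:

awk 'BEGIN { printf "%.2fx throughput on 4 cores\n", 18656 / 4928 }'
# 3.79x, close to the ideal 4x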
00:08:12.914 [2024-07-11 13:38:15.119823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439260 ] 00:08:12.914 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.914 [2024-07-11 13:38:15.174105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.914 [2024-07-11 13:38:15.212389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.914 [2024-07-11 13:38:15.212483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.914 [2024-07-11 13:38:15.212572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.914 [2024-07-11 13:38:15.212573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=0xf 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=decompress 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=software 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case 
"$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=32 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=32 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=1 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val=Yes 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:12.914 13:38:15 -- accel/accel.sh@21 -- # val= 00:08:12.914 13:38:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # IFS=: 00:08:12.914 13:38:15 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 
13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@21 -- # val= 00:08:14.292 13:38:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # IFS=: 00:08:14.292 13:38:16 -- accel/accel.sh@20 -- # read -r var val 00:08:14.292 13:38:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:14.292 13:38:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:14.292 13:38:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.292 00:08:14.292 real 0m2.626s 00:08:14.292 user 0m9.123s 00:08:14.292 sys 0m0.231s 00:08:14.292 13:38:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.292 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:14.292 ************************************ 00:08:14.292 END TEST accel_decomp_full_mcore 00:08:14.292 ************************************ 00:08:14.292 13:38:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.292 13:38:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:14.292 13:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.292 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:08:14.292 ************************************ 00:08:14.292 START TEST accel_decomp_mthread 00:08:14.292 ************************************ 00:08:14.292 13:38:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.292 13:38:16 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.292 13:38:16 -- accel/accel.sh@17 -- # local accel_module 00:08:14.292 13:38:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.292 13:38:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:14.292 13:38:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.292 13:38:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.292 13:38:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.292 13:38:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.292 13:38:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.292 13:38:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.292 13:38:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.292 13:38:16 -- accel/accel.sh@42 -- # jq -r . 00:08:14.292 [2024-07-11 13:38:16.473987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:14.292 [2024-07-11 13:38:16.474064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439495 ] 00:08:14.292 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.292 [2024-07-11 13:38:16.528639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.292 [2024-07-11 13:38:16.565653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.672 13:38:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:15.672 00:08:15.672 SPDK Configuration: 00:08:15.672 Core mask: 0x1 00:08:15.672 00:08:15.672 Accel Perf Configuration: 00:08:15.672 Workload Type: decompress 00:08:15.672 Transfer size: 4096 bytes 00:08:15.672 Vector count 1 00:08:15.672 Module: software 00:08:15.672 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:15.672 Queue depth: 32 00:08:15.672 Allocate depth: 32 00:08:15.672 # threads/core: 2 00:08:15.672 Run time: 1 seconds 00:08:15.672 Verify: Yes 00:08:15.672 00:08:15.672 Running for 1 seconds... 00:08:15.672 00:08:15.672 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:15.672 ------------------------------------------------------------------------------------ 00:08:15.672 0,1 38304/s 70 MiB/s 0 0 00:08:15.672 0,0 38208/s 70 MiB/s 0 0 00:08:15.672 ==================================================================================== 00:08:15.672 Total 76512/s 298 MiB/s 0 0' 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.672 13:38:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.672 13:38:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.672 13:38:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.672 13:38:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.672 13:38:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.672 13:38:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.672 13:38:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.672 13:38:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.672 13:38:17 -- accel/accel.sh@42 -- # jq -r . 00:08:15.672 [2024-07-11 13:38:17.765626] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
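The -T 2 flag requests two worker threads per enabled core; the first column of the table above is core,thread, so rows 0,0 and 0,1 are the two threads sharing core 0, and their rates sum to the Total row:

awk 'BEGIN { print 38304 + 38208, "transfers/s" }'
# 76512 transfers/s, as reported in the Total row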
00:08:15.672 [2024-07-11 13:38:17.765705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439685 ] 00:08:15.672 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.672 [2024-07-11 13:38:17.820796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.672 [2024-07-11 13:38:17.857287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=0x1 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=decompress 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=software 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=32 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 
-- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=32 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=2 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val=Yes 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:15.672 13:38:17 -- accel/accel.sh@21 -- # val= 00:08:15.672 13:38:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # IFS=: 00:08:15.672 13:38:17 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@21 -- # val= 00:08:16.611 13:38:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # IFS=: 00:08:16.611 13:38:19 -- accel/accel.sh@20 -- # read -r var val 00:08:16.611 13:38:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:16.611 13:38:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:16.611 13:38:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.611 00:08:16.611 real 0m2.590s 00:08:16.611 user 0m2.376s 00:08:16.611 sys 0m0.223s 00:08:16.611 13:38:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.611 13:38:19 -- common/autotest_common.sh@10 -- # set +x 
00:08:16.611 ************************************ 00:08:16.611 END TEST accel_decomp_mthread 00:08:16.611 ************************************ 00:08:16.871 13:38:19 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.871 13:38:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:16.871 13:38:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.871 13:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:16.871 ************************************ 00:08:16.871 START TEST accel_deomp_full_mthread 00:08:16.871 ************************************ 00:08:16.871 13:38:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.871 13:38:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.871 13:38:19 -- accel/accel.sh@17 -- # local accel_module 00:08:16.871 13:38:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.871 13:38:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:16.871 13:38:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.871 13:38:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.871 13:38:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.871 13:38:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.871 13:38:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.871 13:38:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.871 13:38:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.871 13:38:19 -- accel/accel.sh@42 -- # jq -r . 00:08:16.871 [2024-07-11 13:38:19.102729] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:16.871 [2024-07-11 13:38:19.102787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439916 ] 00:08:16.871 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.871 [2024-07-11 13:38:19.156858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.871 [2024-07-11 13:38:19.193868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.251 13:38:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:18.251 00:08:18.251 SPDK Configuration: 00:08:18.251 Core mask: 0x1 00:08:18.251 00:08:18.251 Accel Perf Configuration: 00:08:18.251 Workload Type: decompress 00:08:18.251 Transfer size: 111250 bytes 00:08:18.251 Vector count 1 00:08:18.251 Module: software 00:08:18.251 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.251 Queue depth: 32 00:08:18.251 Allocate depth: 32 00:08:18.251 # threads/core: 2 00:08:18.251 Run time: 1 seconds 00:08:18.251 Verify: Yes 00:08:18.251 00:08:18.251 Running for 1 seconds... 
00:08:18.251 00:08:18.251 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:18.251 ------------------------------------------------------------------------------------ 00:08:18.251 0,1 2528/s 104 MiB/s 0 0 00:08:18.251 0,0 2528/s 104 MiB/s 0 0 00:08:18.251 ==================================================================================== 00:08:18.251 Total 5056/s 536 MiB/s 0 0' 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.251 13:38:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:18.251 13:38:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.251 13:38:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.251 13:38:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.251 13:38:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.251 13:38:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.251 13:38:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.251 13:38:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.251 13:38:20 -- accel/accel.sh@42 -- # jq -r . 00:08:18.251 [2024-07-11 13:38:20.414382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:18.251 [2024-07-11 13:38:20.414459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440104 ] 00:08:18.251 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.251 [2024-07-11 13:38:20.469666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.251 [2024-07-11 13:38:20.507688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val=0x1 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val=decompress 00:08:18.251 
13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.251 13:38:20 -- accel/accel.sh@21 -- # val=software 00:08:18.251 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.251 13:38:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:18.251 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val=32 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val=32 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val=2 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val=Yes 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:18.252 13:38:20 -- accel/accel.sh@21 -- # val= 00:08:18.252 13:38:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # IFS=: 00:08:18.252 13:38:20 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # 
case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@21 -- # val= 00:08:19.632 13:38:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # IFS=: 00:08:19.632 13:38:21 -- accel/accel.sh@20 -- # read -r var val 00:08:19.632 13:38:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:19.632 13:38:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:19.632 13:38:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.632 00:08:19.632 real 0m2.629s 00:08:19.632 user 0m2.408s 00:08:19.632 sys 0m0.232s 00:08:19.632 13:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.632 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:08:19.632 ************************************ 00:08:19.632 END TEST accel_deomp_full_mthread 00:08:19.632 ************************************ 00:08:19.632 13:38:21 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:19.632 13:38:21 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:19.632 13:38:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.632 13:38:21 -- accel/accel.sh@129 -- # build_accel_config 00:08:19.632 13:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.632 13:38:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:19.632 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:08:19.632 13:38:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.632 13:38:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.632 13:38:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:19.632 13:38:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:19.632 13:38:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:19.632 13:38:21 -- accel/accel.sh@42 -- # jq -r . 00:08:19.632 ************************************ 00:08:19.632 START TEST accel_dif_functional_tests 00:08:19.632 ************************************ 00:08:19.632 13:38:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:19.632 [2024-07-11 13:38:21.787625] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
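The val=/case ladder traced above is accel.sh's configuration reader: each record is split on ':' into a key and a value, and the case statement folds recognized keys into shell variables such as accel_opc and accel_module. A minimal sketch of the idiom follows; the loop structure and input source are assumptions, and only the IFS=: / read -r var val pattern and the variable names appear in the trace:

  while IFS=: read -r var val; do          # split each record on ':'
    case "$var" in
      *opc*) accel_opc=$val ;;             # e.g. decompress, as logged above
      *module*) accel_module=$val ;;       # e.g. software
      *) : ;;                              # other keys carry sizes, queue depth, duration
    esac
  done <<< "$accel_config"                 # assumed input; the suite reads from a fd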
00:08:19.632 [2024-07-11 13:38:21.787673] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440347 ] 00:08:19.632 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.632 [2024-07-11 13:38:21.842508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.632 [2024-07-11 13:38:21.882829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.632 [2024-07-11 13:38:21.884177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.632 [2024-07-11 13:38:21.884181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.632 00:08:19.632 00:08:19.632 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.632 http://cunit.sourceforge.net/ 00:08:19.632 00:08:19.632 00:08:19.632 Suite: accel_dif 00:08:19.632 Test: verify: DIF generated, GUARD check ...passed 00:08:19.632 Test: verify: DIF generated, APPTAG check ...passed 00:08:19.632 Test: verify: DIF generated, REFTAG check ...passed 00:08:19.632 Test: verify: DIF not generated, GUARD check ...[2024-07-11 13:38:21.947444] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:19.632 [2024-07-11 13:38:21.947488] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:19.632 passed 00:08:19.632 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 13:38:21.947519] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:19.632 [2024-07-11 13:38:21.947532] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:19.632 passed 00:08:19.632 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 13:38:21.947550] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:19.632 [2024-07-11 13:38:21.947563] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:19.632 passed 00:08:19.632 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:19.632 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 13:38:21.947603] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:19.632 passed 00:08:19.632 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:19.632 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:19.632 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:19.632 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 13:38:21.947699] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:19.632 passed 00:08:19.632 Test: generate copy: DIF generated, GUARD check ...passed 00:08:19.632 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:19.632 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:19.632 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:19.632 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:19.632 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:19.632 Test: generate copy: iovecs-len validate ...[2024-07-11 13:38:21.947862] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:19.632 passed 00:08:19.632 Test: generate copy: buffer alignment validate ...passed 00:08:19.632 00:08:19.632 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.632 suites 1 1 n/a 0 0 00:08:19.632 tests 20 20 20 0 0 00:08:19.632 asserts 204 204 204 0 n/a 00:08:19.632 00:08:19.632 Elapsed time = 0.000 seconds 00:08:19.892 00:08:19.892 real 0m0.361s 00:08:19.892 user 0m0.553s 00:08:19.892 sys 0m0.148s 00:08:19.892 13:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.892 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 ************************************ 00:08:19.892 END TEST accel_dif_functional_tests 00:08:19.892 ************************************ 00:08:19.892 00:08:19.892 real 0m54.944s 00:08:19.892 user 1m3.587s 00:08:19.892 sys 0m6.007s 00:08:19.892 13:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.892 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 ************************************ 00:08:19.892 END TEST accel 00:08:19.892 ************************************ 00:08:19.892 13:38:22 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:19.892 13:38:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:19.892 13:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.892 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 ************************************ 00:08:19.892 START TEST accel_rpc 00:08:19.892 ************************************ 00:08:19.892 13:38:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:19.892 * Looking for test storage... 00:08:19.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:19.892 13:38:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.892 13:38:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1440600 00:08:19.892 13:38:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 1440600 00:08:19.892 13:38:22 -- common/autotest_common.sh@819 -- # '[' -z 1440600 ']' 00:08:19.892 13:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.892 13:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:19.892 13:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.892 13:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:19.892 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 13:38:22 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:19.892 [2024-07-11 13:38:22.309968] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
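The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the suite's waitforlisten helper. A condensed sketch of that pattern; the function body is an assumption, while the socket path and rpc_get_methods are taken from the trace:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      # the target counts as up once any RPC round-trips on the socket
      scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
      kill -0 "$pid" 2> /dev/null || return 1    # target exited before listening
      sleep 0.1
    done
    return 1
  }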
00:08:19.892 [2024-07-11 13:38:22.310018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440600 ] 00:08:19.893 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.186 [2024-07-11 13:38:22.363394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.186 [2024-07-11 13:38:22.402011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.186 [2024-07-11 13:38:22.402131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.186 13:38:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.186 13:38:22 -- common/autotest_common.sh@852 -- # return 0 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:20.186 13:38:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.186 13:38:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.186 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 ************************************ 00:08:20.186 START TEST accel_assign_opcode 00:08:20.186 ************************************ 00:08:20.186 13:38:22 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:20.186 13:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.186 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 [2024-07-11 13:38:22.438494] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:20.186 13:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:20.186 13:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.186 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 [2024-07-11 13:38:22.446508] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:20.186 13:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:20.186 13:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.186 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 13:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:20.186 13:38:22 -- accel/accel_rpc.sh@42 -- # grep software 00:08:20.186 13:38:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.186 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:08:20.186 13:38:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.445 software 00:08:20.445 00:08:20.445 real 0m0.215s 00:08:20.445 user 0m0.040s 00:08:20.445 sys 0m0.007s 00:08:20.445 13:38:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.445 13:38:22 -- common/autotest_common.sh@10 -- # set +x 
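The accel_assign_opcode suite above boils down to three RPCs: assign the copy opcode to a module before framework initialization, initialize, then confirm the binding stuck. The same sequence as direct calls, a sketch; the test issues these through its rpc_cmd wrapper:

  rpc.py accel_assign_opc -o copy -m incorrect    # pre-init: accepted even though the module does not exist
  rpc.py accel_assign_opc -o copy -m software     # overrides the bogus assignment
  rpc.py framework_start_init
  rpc.py accel_get_opc_assignments | jq -r .copy  # expect: software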
00:08:20.445 ************************************ 00:08:20.445 END TEST accel_assign_opcode 00:08:20.445 ************************************ 00:08:20.445 13:38:22 -- accel/accel_rpc.sh@55 -- # killprocess 1440600 00:08:20.445 13:38:22 -- common/autotest_common.sh@926 -- # '[' -z 1440600 ']' 00:08:20.445 13:38:22 -- common/autotest_common.sh@930 -- # kill -0 1440600 00:08:20.445 13:38:22 -- common/autotest_common.sh@931 -- # uname 00:08:20.445 13:38:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.445 13:38:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1440600 00:08:20.445 13:38:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:20.445 13:38:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:20.445 13:38:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1440600' 00:08:20.445 killing process with pid 1440600 00:08:20.445 13:38:22 -- common/autotest_common.sh@945 -- # kill 1440600 00:08:20.445 13:38:22 -- common/autotest_common.sh@950 -- # wait 1440600 00:08:20.704 00:08:20.704 real 0m0.843s 00:08:20.704 user 0m0.771s 00:08:20.704 sys 0m0.362s 00:08:20.704 13:38:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.704 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:08:20.704 ************************************ 00:08:20.704 END TEST accel_rpc 00:08:20.704 ************************************ 00:08:20.705 13:38:23 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:20.705 13:38:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:20.705 13:38:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.705 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:08:20.705 ************************************ 00:08:20.705 START TEST app_cmdline 00:08:20.705 ************************************ 00:08:20.705 13:38:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:20.705 * Looking for test storage... 00:08:20.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:20.705 13:38:23 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:20.705 13:38:23 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1440741 00:08:20.705 13:38:23 -- app/cmdline.sh@18 -- # waitforlisten 1440741 00:08:20.705 13:38:23 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:20.705 13:38:23 -- common/autotest_common.sh@819 -- # '[' -z 1440741 ']' 00:08:20.705 13:38:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.705 13:38:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:20.705 13:38:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.705 13:38:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:20.705 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:08:20.964 [2024-07-11 13:38:23.186767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
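The cmdline target launched above runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly two methods are callable and everything else fails with JSON-RPC error -32601, which is what the test exercises next. As direct calls, a sketch with the socket path as logged:

  rpc.py -s /var/tmp/spdk.sock spdk_get_version        # allowed: returns the version object
  rpc.py -s /var/tmp/spdk.sock rpc_get_methods         # allowed: returns the two-method list
  rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats  # filtered: code -32601, 'Method not found'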
00:08:20.964 [2024-07-11 13:38:23.186821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440741 ] 00:08:20.964 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.964 [2024-07-11 13:38:23.238127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.964 [2024-07-11 13:38:23.276221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.964 [2024-07-11 13:38:23.276343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.531 13:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:21.531 13:38:23 -- common/autotest_common.sh@852 -- # return 0 00:08:21.531 13:38:23 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:21.789 { 00:08:21.789 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:08:21.789 "fields": { 00:08:21.789 "major": 24, 00:08:21.789 "minor": 1, 00:08:21.789 "patch": 1, 00:08:21.789 "suffix": "-pre", 00:08:21.789 "commit": "4b94202c6" 00:08:21.789 } 00:08:21.789 } 00:08:21.789 13:38:24 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:21.789 13:38:24 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:21.789 13:38:24 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:21.789 13:38:24 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:21.789 13:38:24 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:21.789 13:38:24 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:21.789 13:38:24 -- app/cmdline.sh@26 -- # sort 00:08:21.789 13:38:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.789 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:21.789 13:38:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.789 13:38:24 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:21.789 13:38:24 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:21.789 13:38:24 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.789 13:38:24 -- common/autotest_common.sh@640 -- # local es=0 00:08:21.789 13:38:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.789 13:38:24 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.789 13:38:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.789 13:38:24 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.789 13:38:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.789 13:38:24 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.789 13:38:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:21.789 13:38:24 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:21.789 13:38:24 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:21.789 13:38:24 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:22.048 request: 00:08:22.048 { 00:08:22.048 "method": "env_dpdk_get_mem_stats", 00:08:22.048 "req_id": 1 00:08:22.048 } 00:08:22.048 Got JSON-RPC error response 00:08:22.048 response: 00:08:22.048 { 00:08:22.048 "code": -32601, 00:08:22.048 "message": "Method not found" 00:08:22.048 } 00:08:22.048 13:38:24 -- common/autotest_common.sh@643 -- # es=1 00:08:22.048 13:38:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:22.048 13:38:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:22.048 13:38:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:22.048 13:38:24 -- app/cmdline.sh@1 -- # killprocess 1440741 00:08:22.048 13:38:24 -- common/autotest_common.sh@926 -- # '[' -z 1440741 ']' 00:08:22.048 13:38:24 -- common/autotest_common.sh@930 -- # kill -0 1440741 00:08:22.048 13:38:24 -- common/autotest_common.sh@931 -- # uname 00:08:22.048 13:38:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:22.048 13:38:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1440741 00:08:22.048 13:38:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:22.048 13:38:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:22.048 13:38:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1440741' 00:08:22.048 killing process with pid 1440741 00:08:22.048 13:38:24 -- common/autotest_common.sh@945 -- # kill 1440741 00:08:22.048 13:38:24 -- common/autotest_common.sh@950 -- # wait 1440741 00:08:22.308 00:08:22.308 real 0m1.604s 00:08:22.308 user 0m1.916s 00:08:22.308 sys 0m0.393s 00:08:22.308 13:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.308 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.308 ************************************ 00:08:22.308 END TEST app_cmdline 00:08:22.308 ************************************ 00:08:22.308 13:38:24 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:22.308 13:38:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.308 13:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.308 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.308 ************************************ 00:08:22.308 START TEST version 00:08:22.308 ************************************ 00:08:22.308 13:38:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:22.568 * Looking for test storage... 
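The get_header_version calls replayed below each extract one token from include/spdk/version.h; the helper amounts to the grep/cut/tr pipeline visible in the trace (sketch; the parameterized form is assumed):

  get_header_version() {  # $1 is MAJOR, MINOR, PATCH, or SUFFIX
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
      cut -f2 | tr -d '"'
  }
  # here: MAJOR=24, MINOR=1, PATCH=1, SUFFIX=-pre, giving 24.1.1 with a -pre suffix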
00:08:22.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:22.568 13:38:24 -- app/version.sh@17 -- # get_header_version major 00:08:22.568 13:38:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:22.568 13:38:24 -- app/version.sh@14 -- # cut -f2 00:08:22.568 13:38:24 -- app/version.sh@14 -- # tr -d '"' 00:08:22.568 13:38:24 -- app/version.sh@17 -- # major=24 00:08:22.568 13:38:24 -- app/version.sh@18 -- # get_header_version minor 00:08:22.568 13:38:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:22.568 13:38:24 -- app/version.sh@14 -- # cut -f2 00:08:22.568 13:38:24 -- app/version.sh@14 -- # tr -d '"' 00:08:22.568 13:38:24 -- app/version.sh@18 -- # minor=1 00:08:22.568 13:38:24 -- app/version.sh@19 -- # get_header_version patch 00:08:22.568 13:38:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:22.568 13:38:24 -- app/version.sh@14 -- # cut -f2 00:08:22.568 13:38:24 -- app/version.sh@14 -- # tr -d '"' 00:08:22.568 13:38:24 -- app/version.sh@19 -- # patch=1 00:08:22.568 13:38:24 -- app/version.sh@20 -- # get_header_version suffix 00:08:22.568 13:38:24 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:22.568 13:38:24 -- app/version.sh@14 -- # cut -f2 00:08:22.568 13:38:24 -- app/version.sh@14 -- # tr -d '"' 00:08:22.568 13:38:24 -- app/version.sh@20 -- # suffix=-pre 00:08:22.568 13:38:24 -- app/version.sh@22 -- # version=24.1 00:08:22.568 13:38:24 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:22.568 13:38:24 -- app/version.sh@25 -- # version=24.1.1 00:08:22.568 13:38:24 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:22.568 13:38:24 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:22.568 13:38:24 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:22.568 13:38:24 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:22.568 13:38:24 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:22.568 00:08:22.568 real 0m0.149s 00:08:22.568 user 0m0.075s 00:08:22.568 sys 0m0.109s 00:08:22.568 13:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.568 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 ************************************ 00:08:22.568 END TEST version 00:08:22.568 ************************************ 00:08:22.568 13:38:24 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@204 -- # uname -s 00:08:22.568 13:38:24 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:22.568 13:38:24 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:22.568 13:38:24 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:22.568 13:38:24 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:22.568 13:38:24 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:22.568 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 13:38:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:22.568 13:38:24 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:08:22.568 13:38:24 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:22.568 13:38:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:22.568 13:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.568 13:38:24 -- common/autotest_common.sh@10 -- # set +x 00:08:22.568 ************************************ 00:08:22.568 START TEST nvmf_tcp 00:08:22.568 ************************************ 00:08:22.568 13:38:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:22.568 * Looking for test storage... 00:08:22.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:22.568 13:38:25 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:22.568 13:38:25 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:22.568 13:38:25 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.568 13:38:25 -- nvmf/common.sh@7 -- # uname -s 00:08:22.568 13:38:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.568 13:38:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.568 13:38:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.568 13:38:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.568 13:38:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.568 13:38:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.568 13:38:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.568 13:38:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.568 13:38:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.568 13:38:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.828 13:38:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.828 13:38:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.828 13:38:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.828 13:38:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.828 13:38:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.828 13:38:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.828 13:38:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.828 13:38:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.828 13:38:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.828 13:38:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.828 13:38:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.828 13:38:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.828 13:38:25 -- paths/export.sh@5 -- # export PATH 00:08:22.828 13:38:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.828 13:38:25 -- nvmf/common.sh@46 -- # : 0 00:08:22.828 13:38:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.828 13:38:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.828 13:38:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.828 13:38:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.828 13:38:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.828 13:38:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:22.828 13:38:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.828 13:38:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.828 13:38:25 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:22.828 13:38:25 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:22.828 13:38:25 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:22.828 13:38:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.828 13:38:25 -- common/autotest_common.sh@10 -- # set +x 00:08:22.828 13:38:25 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:22.828 13:38:25 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:22.828 13:38:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:22.828 13:38:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.828 13:38:25 -- common/autotest_common.sh@10 -- # set +x 00:08:22.828 ************************************ 00:08:22.828 START TEST nvmf_example 00:08:22.828 ************************************ 00:08:22.829 13:38:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:22.829 * Looking for test storage... 
00:08:22.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.829 13:38:25 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.829 13:38:25 -- nvmf/common.sh@7 -- # uname -s 00:08:22.829 13:38:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.829 13:38:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.829 13:38:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.829 13:38:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.829 13:38:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.829 13:38:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.829 13:38:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.829 13:38:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.829 13:38:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.829 13:38:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.829 13:38:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.829 13:38:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:22.829 13:38:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.829 13:38:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.829 13:38:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.829 13:38:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.829 13:38:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.829 13:38:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.829 13:38:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.829 13:38:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.829 13:38:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.829 13:38:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.829 13:38:25 -- paths/export.sh@5 -- # export PATH 00:08:22.829 13:38:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.829 13:38:25 -- nvmf/common.sh@46 -- # : 0 00:08:22.829 13:38:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:22.829 13:38:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:22.829 13:38:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:22.829 13:38:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.829 13:38:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.829 13:38:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:22.829 13:38:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:22.829 13:38:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:22.829 13:38:25 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:22.829 13:38:25 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:22.829 13:38:25 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:22.829 13:38:25 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:22.829 13:38:25 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:22.829 13:38:25 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:22.829 13:38:25 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:22.829 13:38:25 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:22.829 13:38:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.829 13:38:25 -- common/autotest_common.sh@10 -- # set +x 00:08:22.829 13:38:25 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:22.829 13:38:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:22.829 13:38:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.829 13:38:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:22.829 13:38:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:22.829 13:38:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:22.829 13:38:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.829 13:38:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.829 13:38:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.829 13:38:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:22.829 13:38:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:22.829 13:38:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:22.829 13:38:25 -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.130 13:38:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:28.130 13:38:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:28.130 13:38:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:28.130 13:38:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:28.130 13:38:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:28.130 13:38:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:28.130 13:38:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:28.130 13:38:29 -- nvmf/common.sh@294 -- # net_devs=() 00:08:28.130 13:38:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:28.130 13:38:29 -- nvmf/common.sh@295 -- # e810=() 00:08:28.130 13:38:29 -- nvmf/common.sh@295 -- # local -ga e810 00:08:28.130 13:38:29 -- nvmf/common.sh@296 -- # x722=() 00:08:28.130 13:38:29 -- nvmf/common.sh@296 -- # local -ga x722 00:08:28.130 13:38:29 -- nvmf/common.sh@297 -- # mlx=() 00:08:28.130 13:38:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:28.130 13:38:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.130 13:38:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:28.130 13:38:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:28.130 13:38:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:28.130 13:38:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.130 13:38:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:28.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:28.130 13:38:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:28.130 13:38:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:28.131 13:38:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:28.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:28.131 13:38:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:08:28.131 13:38:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:28.131 13:38:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.131 13:38:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.131 13:38:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.131 13:38:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.131 13:38:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:28.131 Found net devices under 0000:86:00.0: cvl_0_0 00:08:28.131 13:38:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.131 13:38:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:28.131 13:38:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.131 13:38:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:28.131 13:38:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.131 13:38:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:28.131 Found net devices under 0000:86:00.1: cvl_0_1 00:08:28.131 13:38:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.131 13:38:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:28.131 13:38:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:28.131 13:38:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:28.131 13:38:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.131 13:38:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.131 13:38:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.131 13:38:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:28.131 13:38:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.131 13:38:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.131 13:38:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:28.131 13:38:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.131 13:38:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.131 13:38:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:28.131 13:38:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:28.131 13:38:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.131 13:38:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.131 13:38:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.131 13:38:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.131 13:38:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:28.131 13:38:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.131 13:38:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.131 13:38:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.131 13:38:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:28.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:08:28.131 00:08:28.131 --- 10.0.0.2 ping statistics --- 00:08:28.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.131 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:28.131 13:38:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:08:28.131 00:08:28.131 --- 10.0.0.1 ping statistics --- 00:08:28.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.131 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:28.131 13:38:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.131 13:38:29 -- nvmf/common.sh@410 -- # return 0 00:08:28.131 13:38:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:28.131 13:38:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.131 13:38:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:28.131 13:38:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.131 13:38:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:28.131 13:38:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:28.131 13:38:30 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:28.131 13:38:30 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:28.131 13:38:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:28.131 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.131 13:38:30 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:28.131 13:38:30 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:28.131 13:38:30 -- target/nvmf_example.sh@34 -- # nvmfpid=1444128 00:08:28.131 13:38:30 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.131 13:38:30 -- target/nvmf_example.sh@36 -- # waitforlisten 1444128 00:08:28.131 13:38:30 -- common/autotest_common.sh@819 -- # '[' -z 1444128 ']' 00:08:28.131 13:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.131 13:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:28.131 13:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
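The nvmf_tcp_init sequence above splits the NIC's two ports between the host and a private network namespace, so initiator and target traffic crosses a real link even on a single machine. The same steps as plain commands, with device names and addresses exactly as logged:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # sanity check, ~0.17 ms per the output above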
00:08:28.131 13:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:28.131 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.131 13:38:30 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:28.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.700 13:38:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.700 13:38:30 -- common/autotest_common.sh@852 -- # return 0 00:08:28.700 13:38:30 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:28.700 13:38:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.700 13:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.700 13:38:30 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:28.700 13:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.700 13:38:30 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:28.700 13:38:30 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.700 13:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.700 13:38:30 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:28.700 13:38:30 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.700 13:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.700 13:38:30 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.700 13:38:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.700 13:38:30 -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 13:38:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.700 13:38:30 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:28.700 13:38:30 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:28.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.913 Initializing NVMe Controllers 00:08:40.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:40.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:40.913 Initialization complete. Launching workers. 
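The example target above is configured with a handful of RPCs and then driven by spdk_nvme_perf from the host side of the namespace split. The equivalent direct invocation, a sketch in which every argument value is taken from the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                        # 64 MiB backing bdev, 512 B blocks -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  # queue depth 64, 4 KiB I/O, 30% reads / 70% writes, 10 s run; the results follow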
00:08:40.913 ======================================================== 00:08:40.913 Latency(us) 00:08:40.913 Device Information : IOPS MiB/s Average min max 00:08:40.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17904.51 69.94 3574.30 678.43 15527.30 00:08:40.913 ======================================================== 00:08:40.913 Total : 17904.51 69.94 3574.30 678.43 15527.30 00:08:40.913 00:08:40.913 13:38:41 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:40.913 13:38:41 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:40.913 13:38:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:40.913 13:38:41 -- nvmf/common.sh@116 -- # sync 00:08:40.913 13:38:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:40.913 13:38:41 -- nvmf/common.sh@119 -- # set +e 00:08:40.913 13:38:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:40.913 13:38:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:40.913 rmmod nvme_tcp 00:08:40.913 rmmod nvme_fabrics 00:08:40.913 rmmod nvme_keyring 00:08:40.913 13:38:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:40.913 13:38:41 -- nvmf/common.sh@123 -- # set -e 00:08:40.913 13:38:41 -- nvmf/common.sh@124 -- # return 0 00:08:40.913 13:38:41 -- nvmf/common.sh@477 -- # '[' -n 1444128 ']' 00:08:40.913 13:38:41 -- nvmf/common.sh@478 -- # killprocess 1444128 00:08:40.913 13:38:41 -- common/autotest_common.sh@926 -- # '[' -z 1444128 ']' 00:08:40.913 13:38:41 -- common/autotest_common.sh@930 -- # kill -0 1444128 00:08:40.913 13:38:41 -- common/autotest_common.sh@931 -- # uname 00:08:40.913 13:38:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:40.913 13:38:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1444128 00:08:40.913 13:38:41 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:40.913 13:38:41 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:40.913 13:38:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1444128' 00:08:40.914 killing process with pid 1444128 00:08:40.914 13:38:41 -- common/autotest_common.sh@945 -- # kill 1444128 00:08:40.914 13:38:41 -- common/autotest_common.sh@950 -- # wait 1444128 00:08:40.914 nvmf threads initialize successfully 00:08:40.914 bdev subsystem init successfully 00:08:40.914 created a nvmf target service 00:08:40.914 create targets's poll groups done 00:08:40.914 all subsystems of target started 00:08:40.914 nvmf target is running 00:08:40.914 all subsystems of target stopped 00:08:40.914 destroy targets's poll groups done 00:08:40.914 destroyed the nvmf target service 00:08:40.914 bdev subsystem finish successfully 00:08:40.914 nvmf threads destroy successfully 00:08:40.914 13:38:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.914 13:38:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:40.914 13:38:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:40.914 13:38:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.914 13:38:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:40.914 13:38:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.914 13:38:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.914 13:38:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.173 13:38:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:41.173 13:38:43 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:41.173 13:38:43 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:41.173 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:41.173 00:08:41.173 real 0m18.510s 00:08:41.173 user 0m45.489s 00:08:41.173 sys 0m4.963s 00:08:41.173 13:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.173 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:41.173 ************************************ 00:08:41.173 END TEST nvmf_example 00:08:41.173 ************************************ 00:08:41.173 13:38:43 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:41.173 13:38:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:41.173 13:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.173 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:41.173 ************************************ 00:08:41.173 START TEST nvmf_filesystem 00:08:41.173 ************************************ 00:08:41.173 13:38:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:41.434 * Looking for test storage... 00:08:41.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.435 13:38:43 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:41.435 13:38:43 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:41.435 13:38:43 -- common/autotest_common.sh@34 -- # set -e 00:08:41.435 13:38:43 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:41.435 13:38:43 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:41.435 13:38:43 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:41.435 13:38:43 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:41.435 13:38:43 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:41.435 13:38:43 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:41.435 13:38:43 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:41.435 13:38:43 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:41.435 13:38:43 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:41.435 13:38:43 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:41.435 13:38:43 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:41.435 13:38:43 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:41.435 13:38:43 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:41.435 13:38:43 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:41.435 13:38:43 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:41.435 13:38:43 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:41.435 13:38:43 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:41.435 13:38:43 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:41.435 13:38:43 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:41.435 13:38:43 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:41.435 13:38:43 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:41.435 13:38:43 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:41.435 13:38:43 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:41.435 13:38:43 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:41.435 13:38:43 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:41.435 13:38:43 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:41.435 13:38:43 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:41.435 13:38:43 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:41.435 13:38:43 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:41.435 13:38:43 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:41.435 13:38:43 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:41.435 13:38:43 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:41.435 13:38:43 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:41.435 13:38:43 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:41.435 13:38:43 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:41.435 13:38:43 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:41.435 13:38:43 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:41.435 13:38:43 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:41.435 13:38:43 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:41.435 13:38:43 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:41.435 13:38:43 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:41.435 13:38:43 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:41.435 13:38:43 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:41.435 13:38:43 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:41.435 13:38:43 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:41.435 13:38:43 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:41.435 13:38:43 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:41.435 13:38:43 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:41.435 13:38:43 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:41.435 13:38:43 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:41.435 13:38:43 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:41.435 13:38:43 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:41.435 13:38:43 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:41.435 13:38:43 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:41.435 13:38:43 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:41.435 13:38:43 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:41.435 13:38:43 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:41.435 13:38:43 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:41.435 13:38:43 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:41.435 13:38:43 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:41.435 13:38:43 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:41.435 13:38:43 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:41.435 13:38:43 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:41.435 13:38:43 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:41.435 13:38:43 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:41.435 13:38:43 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:41.435 13:38:43 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:41.435 13:38:43 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:41.435 13:38:43 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:41.435 13:38:43 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:41.435 13:38:43 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:41.435 13:38:43 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:41.435 13:38:43 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:41.435 13:38:43 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:41.435 13:38:43 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:41.435 13:38:43 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:41.435 13:38:43 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:41.435 13:38:43 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:41.435 13:38:43 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:41.435 13:38:43 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:41.435 13:38:43 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:41.435 13:38:43 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:41.435 13:38:43 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:41.435 13:38:43 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:41.435 13:38:43 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:41.435 13:38:43 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:41.435 13:38:43 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:41.435 13:38:43 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:41.435 13:38:43 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:41.435 13:38:43 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:41.435 13:38:43 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:41.435 13:38:43 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:41.435 13:38:43 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:41.435 13:38:43 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:41.435 13:38:43 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:41.435 13:38:43 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:41.435 13:38:43 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:41.435 13:38:43 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:41.435 13:38:43 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:41.435 #define SPDK_CONFIG_H 00:08:41.435 #define SPDK_CONFIG_APPS 1 00:08:41.435 #define SPDK_CONFIG_ARCH native 00:08:41.435 #undef SPDK_CONFIG_ASAN 00:08:41.435 #undef SPDK_CONFIG_AVAHI 00:08:41.435 #undef SPDK_CONFIG_CET 00:08:41.435 #define SPDK_CONFIG_COVERAGE 1 00:08:41.435 #define SPDK_CONFIG_CROSS_PREFIX 00:08:41.435 #undef SPDK_CONFIG_CRYPTO 00:08:41.435 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:41.435 #undef SPDK_CONFIG_CUSTOMOCF 00:08:41.435 #undef SPDK_CONFIG_DAOS 00:08:41.435 #define SPDK_CONFIG_DAOS_DIR 00:08:41.435 #define SPDK_CONFIG_DEBUG 1 00:08:41.435 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:41.435 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:41.435 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:41.435 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:41.435 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:41.435 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:41.435 #define SPDK_CONFIG_EXAMPLES 1 00:08:41.435 #undef SPDK_CONFIG_FC 00:08:41.435 #define SPDK_CONFIG_FC_PATH 00:08:41.435 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:41.436 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:41.436 #undef SPDK_CONFIG_FUSE 00:08:41.436 #undef SPDK_CONFIG_FUZZER 00:08:41.436 #define SPDK_CONFIG_FUZZER_LIB 00:08:41.436 #undef SPDK_CONFIG_GOLANG 00:08:41.436 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:41.436 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:41.436 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:41.436 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:41.436 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:41.436 #define SPDK_CONFIG_IDXD 1 00:08:41.436 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:41.436 #undef SPDK_CONFIG_IPSEC_MB 00:08:41.436 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:41.436 #define SPDK_CONFIG_ISAL 1 00:08:41.436 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:41.436 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:41.436 #define SPDK_CONFIG_LIBDIR 00:08:41.436 #undef SPDK_CONFIG_LTO 00:08:41.436 #define SPDK_CONFIG_MAX_LCORES 00:08:41.436 #define SPDK_CONFIG_NVME_CUSE 1 00:08:41.436 #undef SPDK_CONFIG_OCF 00:08:41.436 #define SPDK_CONFIG_OCF_PATH 00:08:41.436 #define SPDK_CONFIG_OPENSSL_PATH 00:08:41.436 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:41.436 #undef SPDK_CONFIG_PGO_USE 00:08:41.436 #define SPDK_CONFIG_PREFIX /usr/local 00:08:41.436 #undef SPDK_CONFIG_RAID5F 00:08:41.436 #undef SPDK_CONFIG_RBD 00:08:41.436 #define SPDK_CONFIG_RDMA 1 00:08:41.436 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:41.436 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:41.436 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:41.436 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:41.436 #define SPDK_CONFIG_SHARED 1 00:08:41.436 #undef SPDK_CONFIG_SMA 00:08:41.436 #define SPDK_CONFIG_TESTS 1 00:08:41.436 #undef SPDK_CONFIG_TSAN 00:08:41.436 #define SPDK_CONFIG_UBLK 1 00:08:41.436 #define SPDK_CONFIG_UBSAN 1 00:08:41.436 #undef SPDK_CONFIG_UNIT_TESTS 00:08:41.436 #undef SPDK_CONFIG_URING 00:08:41.436 #define SPDK_CONFIG_URING_PATH 00:08:41.436 #undef SPDK_CONFIG_URING_ZNS 00:08:41.436 #undef SPDK_CONFIG_USDT 00:08:41.436 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:41.436 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:41.436 #define SPDK_CONFIG_VFIO_USER 1 00:08:41.436 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:41.436 #define SPDK_CONFIG_VHOST 1 00:08:41.436 #define SPDK_CONFIG_VIRTIO 1 00:08:41.436 #undef SPDK_CONFIG_VTUNE 00:08:41.436 #define SPDK_CONFIG_VTUNE_DIR 00:08:41.436 #define SPDK_CONFIG_WERROR 1 00:08:41.436 #define SPDK_CONFIG_WPDK_DIR 00:08:41.436 #undef SPDK_CONFIG_XNVME 00:08:41.436 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:41.436 13:38:43 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:41.436 13:38:43 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.436 13:38:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.436 13:38:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.436 
13:38:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.436 13:38:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.436 13:38:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.436 13:38:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.436 13:38:43 -- paths/export.sh@5 -- # export PATH 00:08:41.436 13:38:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.436 13:38:43 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:41.436 13:38:43 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:41.436 13:38:43 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:41.436 13:38:43 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:41.436 13:38:43 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:41.436 13:38:43 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:41.436 13:38:43 -- pm/common@16 -- # TEST_TAG=N/A 00:08:41.436 13:38:43 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:41.436 13:38:43 -- common/autotest_common.sh@52 -- # : 1 00:08:41.436 13:38:43 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:41.436 13:38:43 -- common/autotest_common.sh@56 -- # : 0 
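The long run of paired records above and below, a "# : 0" (or "# : 1") immediately followed by "# export SPDK_TEST_...", is bash xtrace output of autotest_common.sh filling in defaults for its test-selection flags; the value printed after ":" is whatever the expansion resolved to in this run (1 for SPDK_TEST_NVMF, tcp for SPDK_TEST_NVMF_TRANSPORT). A minimal sketch of the idiom, with illustrative defaults:

  : "${RUN_NIGHTLY:=0}"                  # ":" is a no-op, so the line's only effect
  export RUN_NIGHTLY                     # is the := assignment when the var is unset
  : "${SPDK_TEST_NVMF:=0}"               # resolved to 1 in this run
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
  export SPDK_TEST_NVMF_TRANSPORT

A value exported by the Jenkins job therefore always wins over the in-script default, which is how one autotest script serves every job flavor; the flag pairs continue below.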
00:08:41.436 13:38:43 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:41.436 13:38:43 -- common/autotest_common.sh@58 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:41.436 13:38:43 -- common/autotest_common.sh@60 -- # : 1 00:08:41.436 13:38:43 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:41.436 13:38:43 -- common/autotest_common.sh@62 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:41.436 13:38:43 -- common/autotest_common.sh@64 -- # : 00:08:41.436 13:38:43 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:41.436 13:38:43 -- common/autotest_common.sh@66 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:41.436 13:38:43 -- common/autotest_common.sh@68 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:41.436 13:38:43 -- common/autotest_common.sh@70 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:41.436 13:38:43 -- common/autotest_common.sh@72 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:41.436 13:38:43 -- common/autotest_common.sh@74 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:41.436 13:38:43 -- common/autotest_common.sh@76 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:41.436 13:38:43 -- common/autotest_common.sh@78 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:41.436 13:38:43 -- common/autotest_common.sh@80 -- # : 1 00:08:41.436 13:38:43 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:41.436 13:38:43 -- common/autotest_common.sh@82 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:41.436 13:38:43 -- common/autotest_common.sh@84 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:41.436 13:38:43 -- common/autotest_common.sh@86 -- # : 1 00:08:41.436 13:38:43 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:41.436 13:38:43 -- common/autotest_common.sh@88 -- # : 1 00:08:41.436 13:38:43 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:41.436 13:38:43 -- common/autotest_common.sh@90 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:41.436 13:38:43 -- common/autotest_common.sh@92 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:41.436 13:38:43 -- common/autotest_common.sh@94 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:41.436 13:38:43 -- common/autotest_common.sh@96 -- # : tcp 00:08:41.436 13:38:43 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:41.436 13:38:43 -- common/autotest_common.sh@98 -- # : 0 00:08:41.436 13:38:43 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:41.437 13:38:43 -- common/autotest_common.sh@100 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:41.437 13:38:43 -- common/autotest_common.sh@102 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:41.437 13:38:43 -- 
common/autotest_common.sh@104 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:41.437 13:38:43 -- common/autotest_common.sh@106 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:41.437 13:38:43 -- common/autotest_common.sh@108 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:41.437 13:38:43 -- common/autotest_common.sh@110 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:41.437 13:38:43 -- common/autotest_common.sh@112 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:41.437 13:38:43 -- common/autotest_common.sh@114 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:41.437 13:38:43 -- common/autotest_common.sh@116 -- # : 1 00:08:41.437 13:38:43 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:41.437 13:38:43 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:41.437 13:38:43 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:41.437 13:38:43 -- common/autotest_common.sh@120 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:41.437 13:38:43 -- common/autotest_common.sh@122 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:41.437 13:38:43 -- common/autotest_common.sh@124 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:41.437 13:38:43 -- common/autotest_common.sh@126 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:41.437 13:38:43 -- common/autotest_common.sh@128 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:41.437 13:38:43 -- common/autotest_common.sh@130 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:41.437 13:38:43 -- common/autotest_common.sh@132 -- # : v23.11 00:08:41.437 13:38:43 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:41.437 13:38:43 -- common/autotest_common.sh@134 -- # : true 00:08:41.437 13:38:43 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:41.437 13:38:43 -- common/autotest_common.sh@136 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:41.437 13:38:43 -- common/autotest_common.sh@138 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:41.437 13:38:43 -- common/autotest_common.sh@140 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:41.437 13:38:43 -- common/autotest_common.sh@142 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:41.437 13:38:43 -- common/autotest_common.sh@144 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:41.437 13:38:43 -- common/autotest_common.sh@146 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:41.437 13:38:43 -- common/autotest_common.sh@148 -- # : e810 00:08:41.437 13:38:43 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:41.437 13:38:43 -- common/autotest_common.sh@150 -- # : 0 00:08:41.437 13:38:43 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:41.437 13:38:43 -- common/autotest_common.sh@152 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:41.437 13:38:43 -- common/autotest_common.sh@154 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:41.437 13:38:43 -- common/autotest_common.sh@156 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:41.437 13:38:43 -- common/autotest_common.sh@158 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:41.437 13:38:43 -- common/autotest_common.sh@160 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:41.437 13:38:43 -- common/autotest_common.sh@163 -- # : 00:08:41.437 13:38:43 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:41.437 13:38:43 -- common/autotest_common.sh@165 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:41.437 13:38:43 -- common/autotest_common.sh@167 -- # : 0 00:08:41.437 13:38:43 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:41.437 13:38:43 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:41.437 13:38:43 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:41.437 13:38:43 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:41.437 13:38:43 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:41.437 13:38:43 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:41.437 13:38:43 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:41.437 13:38:43 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:41.437 13:38:43 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:41.437 13:38:43 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:41.437 13:38:43 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:41.437 13:38:43 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:41.437 13:38:43 -- common/autotest_common.sh@196 -- # cat 00:08:41.437 13:38:43 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:41.437 13:38:43 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:41.437 13:38:43 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:41.437 13:38:43 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:41.437 13:38:43 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:41.437 13:38:43 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:41.437 13:38:43 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:41.437 13:38:43 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:41.437 13:38:43 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:41.437 13:38:43 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:41.437 13:38:43 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:41.437 13:38:43 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:41.437 13:38:43 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:41.437 13:38:43 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:41.437 13:38:43 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:41.437 13:38:43 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:41.437 13:38:43 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:41.437 13:38:43 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:41.437 13:38:43 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:41.437 13:38:43 -- common/autotest_common.sh@249 -- # valgrind= 00:08:41.437 13:38:43 -- common/autotest_common.sh@255 -- # uname -s 00:08:41.437 13:38:43 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:41.437 13:38:43 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:41.437 13:38:43 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:41.437 13:38:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:41.438 13:38:43 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96 00:08:41.438 13:38:43 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:41.438 13:38:43 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:41.438 13:38:43 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:41.438 13:38:43 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:41.438 13:38:43 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:41.438 13:38:43 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:41.438 13:38:43 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:41.438 13:38:43 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:41.438 13:38:43 -- common/autotest_common.sh@309 -- # [[ -z 1446530 ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@309 -- # kill -0 1446530 00:08:41.438 13:38:43 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:41.438 13:38:43 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:41.438 13:38:43 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:41.438 13:38:43 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:41.438 13:38:43 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:41.438 13:38:43 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:41.438 13:38:43 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:41.438 13:38:43 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.VgI1ST 00:08:41.438 13:38:43 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:41.438 13:38:43 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VgI1ST/tests/target /tmp/spdk.VgI1ST 00:08:41.438 13:38:43 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@318 -- # df -T 00:08:41.438 13:38:43 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=950202368 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4334227456 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=187964989440 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=195974324224 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=8009334784 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
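Everything from the "df -T" and "grep -v Filesystem" records above through the tmpfs rows that follow is set_test_storage building one lookup table per df column, keyed by mount point. A reconstruction from the traced statements (field order matches df -T output: device, fstype, blocks, used, available, use%, mount point):

  # runs inside set_test_storage, hence the local arrays
  local -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source       # e.g. spdk_root
      fss["$mount"]=$fs              # e.g. overlay
      sizes["$mount"]=$size
      uses["$mount"]=$use
      avails["$mount"]=$avail        # the number the space check below cares about
  done < <(df -T | grep -v Filesystem)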
00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=97933643776 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987162112 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=39185489920 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=39194865664 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=9375744 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=97986293760 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987162112 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=868352 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # avails["$mount"]=19597426688 00:08:41.438 13:38:43 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19597430784 00:08:41.438 13:38:43 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:41.438 13:38:43 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:41.438 13:38:43 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:41.438 * Looking for test storage... 
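The records just below apply the actual space check to each storage candidate: requested_size started as 2147483648 (2 GiB of test data) and picked up a small margin to reach the 2214592512 traced above, and a candidate is rejected if its mount lacks that much free space or if the test would push the mount past 95% full (tmpfs and ramfs mounts are special-cased first). A sketch of the per-candidate check under the same names, with the new_size arithmetic inferred from the traced values:

  # body of the "for target_dir in ${storage_candidates[@]}" loop
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails[$mount]}
  (( target_space == 0 || target_space < requested_size )) && continue
  new_size=$(( uses[$mount] + requested_size ))    # 8009334784 + 2214592512 = 10223927296
  (( new_size * 100 / sizes[$mount] > 95 )) && continue
  export SPDK_TEST_STORAGE=$target_dir             # first candidate that fits wins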
00:08:41.438 13:38:43 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:41.438 13:38:43 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:41.438 13:38:43 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.438 13:38:43 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:41.438 13:38:43 -- common/autotest_common.sh@363 -- # mount=/ 00:08:41.438 13:38:43 -- common/autotest_common.sh@365 -- # target_space=187964989440 00:08:41.438 13:38:43 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:41.438 13:38:43 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:41.438 13:38:43 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@372 -- # new_size=10223927296 00:08:41.438 13:38:43 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:41.438 13:38:43 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.438 13:38:43 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.438 13:38:43 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.438 13:38:43 -- common/autotest_common.sh@380 -- # return 0 00:08:41.438 13:38:43 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:41.438 13:38:43 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:41.438 13:38:43 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:41.438 13:38:43 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:41.438 13:38:43 -- common/autotest_common.sh@1672 -- # true 00:08:41.438 13:38:43 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:41.438 13:38:43 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:41.438 13:38:43 -- common/autotest_common.sh@27 -- # exec 00:08:41.438 13:38:43 -- common/autotest_common.sh@29 -- # exec 00:08:41.438 13:38:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:41.438 13:38:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:41.438 13:38:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:41.438 13:38:43 -- common/autotest_common.sh@18 -- # set -x 00:08:41.438 13:38:43 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.438 13:38:43 -- nvmf/common.sh@7 -- # uname -s 00:08:41.438 13:38:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.438 13:38:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.438 13:38:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.438 13:38:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.438 13:38:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.438 13:38:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.438 13:38:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.438 13:38:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.438 13:38:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.438 13:38:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.438 13:38:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.438 13:38:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:41.438 13:38:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.438 13:38:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.438 13:38:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.438 13:38:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.438 13:38:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.438 13:38:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.438 13:38:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.438 13:38:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.438 13:38:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.438 13:38:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.438 13:38:43 -- paths/export.sh@5 -- # export PATH 00:08:41.439 13:38:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.439 13:38:43 -- nvmf/common.sh@46 -- # : 0 00:08:41.439 13:38:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:41.439 13:38:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:41.439 13:38:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:41.439 13:38:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.439 13:38:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.439 13:38:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:41.439 13:38:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:41.439 13:38:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:41.439 13:38:43 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:41.439 13:38:43 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:41.439 13:38:43 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:41.439 13:38:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:41.439 13:38:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.439 13:38:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:41.439 13:38:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:41.439 13:38:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:41.439 13:38:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.439 13:38:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.439 13:38:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.439 13:38:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:41.439 13:38:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:41.439 13:38:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:41.439 13:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:46.715 13:38:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.715 13:38:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.715 13:38:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.715 13:38:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.715 13:38:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.715 13:38:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.715 13:38:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.715 13:38:48 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:46.715 13:38:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.715 13:38:48 -- nvmf/common.sh@295 -- # e810=() 00:08:46.715 13:38:48 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.715 13:38:48 -- nvmf/common.sh@296 -- # x722=() 00:08:46.715 13:38:48 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.715 13:38:48 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.715 13:38:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.715 13:38:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.715 13:38:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.715 13:38:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.715 13:38:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.715 13:38:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.715 13:38:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.716 13:38:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.716 13:38:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:46.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:46.716 13:38:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.716 13:38:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:46.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:46.716 13:38:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.716 13:38:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.716 13:38:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.716 13:38:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:46.716 Found net devices under 0000:86:00.0: cvl_0_0 00:08:46.716 13:38:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.716 13:38:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.716 13:38:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.716 13:38:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.716 13:38:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:46.716 Found net devices under 0000:86:00.1: cvl_0_1 00:08:46.716 13:38:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.716 13:38:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.716 13:38:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.716 13:38:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.716 13:38:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.716 13:38:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.716 13:38:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.716 13:38:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.716 13:38:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.716 13:38:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.716 13:38:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.716 13:38:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.716 13:38:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.716 13:38:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.716 13:38:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.716 13:38:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.716 13:38:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.716 13:38:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.716 13:38:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.716 13:38:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.716 13:38:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.716 13:38:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.716 13:38:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.716 13:38:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:46.716 00:08:46.716 --- 10.0.0.2 ping statistics --- 00:08:46.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.716 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:46.716 13:38:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:46.716 00:08:46.716 --- 10.0.0.1 ping statistics --- 00:08:46.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.716 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:46.716 13:38:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.716 13:38:49 -- nvmf/common.sh@410 -- # return 0 00:08:46.716 13:38:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.716 13:38:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.716 13:38:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.716 13:38:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.716 13:38:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.716 13:38:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.716 13:38:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.977 13:38:49 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:46.977 13:38:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:46.977 13:38:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.977 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 ************************************ 00:08:46.977 START TEST nvmf_filesystem_no_in_capsule 00:08:46.977 ************************************ 00:08:46.977 13:38:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:46.977 13:38:49 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:46.977 13:38:49 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:46.977 13:38:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.977 13:38:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.977 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 13:38:49 -- nvmf/common.sh@469 -- # nvmfpid=1449564 00:08:46.977 13:38:49 -- nvmf/common.sh@470 -- # waitforlisten 1449564 00:08:46.977 13:38:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.977 13:38:49 -- common/autotest_common.sh@819 -- # '[' -z 1449564 ']' 00:08:46.977 13:38:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.977 13:38:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.977 13:38:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.977 13:38:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.977 13:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:46.977 [2024-07-11 13:38:49.250708] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
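By this point the harness has rebuilt the two-port test topology and launched the target inside it: the first E810 port (cvl_0_0) lives in a private network namespace as the target side, the second (cvl_0_1) stays in the root namespace as the initiator, and the cross-namespace pings above prove the 10.0.0.0/24 link works in both directions. Condensed from the commands traced above (the nvmf_tgt path is abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Running the target in its own namespace lets a single machine exercise real NIC-to-NIC NVMe/TCP traffic instead of looping through localhost.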
00:08:46.977 [2024-07-11 13:38:49.250747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.977 [2024-07-11 13:38:49.308854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.977 [2024-07-11 13:38:49.348629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.977 [2024-07-11 13:38:49.348751] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.977 [2024-07-11 13:38:49.348759] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.977 [2024-07-11 13:38:49.348766] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.977 [2024-07-11 13:38:49.348811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.977 [2024-07-11 13:38:49.348924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.977 [2024-07-11 13:38:49.348948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.977 [2024-07-11 13:38:49.348949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.915 13:38:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:47.915 13:38:50 -- common/autotest_common.sh@852 -- # return 0 00:08:47.915 13:38:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:47.915 13:38:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 13:38:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.915 13:38:50 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:47.915 13:38:50 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 [2024-07-11 13:38:50.093667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
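The rpc_cmd records above provision the target end to end for this zero-in-capsule variant: a TCP transport created with "-c 0" (in-capsule data size zero, which is the whole point of nvmf_filesystem_no_in_capsule), a 512 MiB malloc bdev, a subsystem, a namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness wrapper around SPDK's JSON-RPC interface; the equivalent standalone sequence with scripts/rpc.py would look roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1        # 512 MiB total, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The in_capsule value is the test parameter: filesystem.sh passes 0 here, so write data cannot ride inside the command capsule and the target has to fetch it with R2T.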
00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 [2024-07-11 13:38:50.241244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:47.915 13:38:50 -- common/autotest_common.sh@1359 -- # local bs 00:08:47.915 13:38:50 -- common/autotest_common.sh@1360 -- # local nb 00:08:47.915 13:38:50 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:47.915 13:38:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.915 13:38:50 -- common/autotest_common.sh@10 -- # set +x 00:08:47.915 13:38:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.915 13:38:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:47.915 { 00:08:47.915 "name": "Malloc1", 00:08:47.915 "aliases": [ 00:08:47.915 "164e8f78-c108-4f56-bbef-f3c91dc92f12" 00:08:47.915 ], 00:08:47.915 "product_name": "Malloc disk", 00:08:47.915 "block_size": 512, 00:08:47.915 "num_blocks": 1048576, 00:08:47.915 "uuid": "164e8f78-c108-4f56-bbef-f3c91dc92f12", 00:08:47.915 "assigned_rate_limits": { 00:08:47.915 "rw_ios_per_sec": 0, 00:08:47.915 "rw_mbytes_per_sec": 0, 00:08:47.915 "r_mbytes_per_sec": 0, 00:08:47.915 "w_mbytes_per_sec": 0 00:08:47.915 }, 00:08:47.915 "claimed": true, 00:08:47.915 "claim_type": "exclusive_write", 00:08:47.915 "zoned": false, 00:08:47.915 "supported_io_types": { 00:08:47.915 "read": true, 00:08:47.915 "write": true, 00:08:47.915 "unmap": true, 00:08:47.915 "write_zeroes": true, 00:08:47.915 "flush": true, 00:08:47.915 "reset": true, 00:08:47.915 "compare": false, 00:08:47.915 "compare_and_write": false, 00:08:47.915 "abort": true, 00:08:47.915 "nvme_admin": false, 00:08:47.915 "nvme_io": false 00:08:47.915 }, 00:08:47.915 "memory_domains": [ 00:08:47.915 { 00:08:47.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.915 "dma_device_type": 2 00:08:47.915 } 00:08:47.915 ], 00:08:47.915 "driver_specific": {} 00:08:47.915 } 00:08:47.915 ]' 00:08:47.915 13:38:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:47.915 13:38:50 -- common/autotest_common.sh@1362 -- # bs=512 00:08:47.916 13:38:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:47.916 13:38:50 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:47.916 13:38:50 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:47.916 13:38:50 -- common/autotest_common.sh@1367 -- # echo 512 00:08:47.916 13:38:50 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:47.916 13:38:50 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.319 13:38:51 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.319 13:38:51 -- common/autotest_common.sh@1177 -- # local i=0 00:08:49.319 13:38:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.319 13:38:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:49.319 13:38:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:51.242 13:38:53 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:51.242 13:38:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:51.242 13:38:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.242 13:38:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:51.242 13:38:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.242 13:38:53 -- common/autotest_common.sh@1187 -- # return 0 00:08:51.242 13:38:53 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:51.242 13:38:53 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:51.242 13:38:53 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:51.242 13:38:53 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:51.242 13:38:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:51.242 13:38:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:51.242 13:38:53 -- setup/common.sh@80 -- # echo 536870912 00:08:51.242 13:38:53 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:51.242 13:38:53 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:51.242 13:38:53 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:51.242 13:38:53 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:51.500 13:38:53 -- target/filesystem.sh@69 -- # partprobe 00:08:51.759 13:38:54 -- target/filesystem.sh@70 -- # sleep 1 00:08:53.136 13:38:55 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:53.137 13:38:55 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:53.137 13:38:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:53.137 13:38:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.137 13:38:55 -- common/autotest_common.sh@10 -- # set +x 00:08:53.137 ************************************ 00:08:53.137 START TEST filesystem_ext4 00:08:53.137 ************************************ 00:08:53.137 13:38:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:53.137 13:38:55 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:53.137 13:38:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.137 13:38:55 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:53.137 13:38:55 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:53.137 13:38:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:53.137 13:38:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:53.137 13:38:55 -- common/autotest_common.sh@905 -- # local force 00:08:53.137 13:38:55 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:53.137 13:38:55 -- common/autotest_common.sh@908 -- # force=-F 00:08:53.137 13:38:55 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:53.137 mke2fs 1.46.5 (30-Dec-2021) 00:08:53.137 Discarding device blocks: 0/522240 done 00:08:53.137 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:53.137 Filesystem UUID: 9504d417-488b-4bed-8d1d-6f00c43e6da9 00:08:53.137 Superblock backups stored on blocks: 00:08:53.137 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:53.137 00:08:53.137 Allocating group tables: 0/64 done 00:08:53.137 Writing inode tables: 0/64 done 00:08:53.137 Creating journal (8192 blocks): done 00:08:53.137 Writing superblocks and filesystem accounting information: 0/64 done 00:08:53.137 00:08:53.137 13:38:55 -- 
common/autotest_common.sh@921 -- # return 0 00:08:53.137 13:38:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.137 13:38:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.396 13:38:55 -- target/filesystem.sh@25 -- # sync 00:08:53.396 13:38:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.396 13:38:55 -- target/filesystem.sh@27 -- # sync 00:08:53.396 13:38:55 -- target/filesystem.sh@29 -- # i=0 00:08:53.396 13:38:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.396 13:38:55 -- target/filesystem.sh@37 -- # kill -0 1449564 00:08:53.396 13:38:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.396 13:38:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.396 13:38:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.396 13:38:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.396 00:08:53.396 real 0m0.503s 00:08:53.396 user 0m0.029s 00:08:53.396 sys 0m0.059s 00:08:53.396 13:38:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.396 13:38:55 -- common/autotest_common.sh@10 -- # set +x 00:08:53.396 ************************************ 00:08:53.396 END TEST filesystem_ext4 00:08:53.396 ************************************ 00:08:53.396 13:38:55 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:53.396 13:38:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:53.396 13:38:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.396 13:38:55 -- common/autotest_common.sh@10 -- # set +x 00:08:53.396 ************************************ 00:08:53.396 START TEST filesystem_btrfs 00:08:53.396 ************************************ 00:08:53.396 13:38:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:53.396 13:38:55 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:53.396 13:38:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.396 13:38:55 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:53.396 13:38:55 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:53.396 13:38:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:53.396 13:38:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:53.396 13:38:55 -- common/autotest_common.sh@905 -- # local force 00:08:53.396 13:38:55 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:53.396 13:38:55 -- common/autotest_common.sh@910 -- # force=-f 00:08:53.396 13:38:55 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:53.396 btrfs-progs v6.6.2 00:08:53.396 See https://btrfs.readthedocs.io for more information. 00:08:53.396 00:08:53.396 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:53.396 NOTE: several default settings have changed in version 5.15, please make sure 00:08:53.396 this does not affect your deployments: 00:08:53.396 - DUP for metadata (-m dup) 00:08:53.396 - enabled no-holes (-O no-holes) 00:08:53.396 - enabled free-space-tree (-R free-space-tree) 00:08:53.396 00:08:53.396 Label: (null) 00:08:53.396 UUID: 4aa88a20-15cc-4f94-804b-99366f1bfbcd 00:08:53.396 Node size: 16384 00:08:53.396 Sector size: 4096 00:08:53.396 Filesystem size: 510.00MiB 00:08:53.396 Block group profiles: 00:08:53.396 Data: single 8.00MiB 00:08:53.396 Metadata: DUP 32.00MiB 00:08:53.396 System: DUP 8.00MiB 00:08:53.396 SSD detected: yes 00:08:53.396 Zoned device: no 00:08:53.396 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:53.396 Runtime features: free-space-tree 00:08:53.396 Checksum: crc32c 00:08:53.396 Number of devices: 1 00:08:53.396 Devices: 00:08:53.396 ID SIZE PATH 00:08:53.396 1 510.00MiB /dev/nvme0n1p1 00:08:53.396 00:08:53.396 13:38:55 -- common/autotest_common.sh@921 -- # return 0 00:08:53.396 13:38:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.656 13:38:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.656 13:38:56 -- target/filesystem.sh@25 -- # sync 00:08:53.656 13:38:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.656 13:38:56 -- target/filesystem.sh@27 -- # sync 00:08:53.656 13:38:56 -- target/filesystem.sh@29 -- # i=0 00:08:53.656 13:38:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.915 13:38:56 -- target/filesystem.sh@37 -- # kill -0 1449564 00:08:53.915 13:38:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.915 13:38:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.915 13:38:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.915 13:38:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.915 00:08:53.915 real 0m0.427s 00:08:53.915 user 0m0.024s 00:08:53.915 sys 0m0.120s 00:08:53.915 13:38:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.915 13:38:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.915 ************************************ 00:08:53.915 END TEST filesystem_btrfs 00:08:53.915 ************************************ 00:08:53.915 13:38:56 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:53.915 13:38:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:53.915 13:38:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.915 13:38:56 -- common/autotest_common.sh@10 -- # set +x 00:08:53.915 ************************************ 00:08:53.915 START TEST filesystem_xfs 00:08:53.915 ************************************ 00:08:53.915 13:38:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:53.915 13:38:56 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:53.915 13:38:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:53.915 13:38:56 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:53.915 13:38:56 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:53.915 13:38:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:53.915 13:38:56 -- common/autotest_common.sh@904 -- # local i=0 00:08:53.915 13:38:56 -- common/autotest_common.sh@905 -- # local force 00:08:53.915 13:38:56 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:53.915 13:38:56 -- common/autotest_common.sh@910 -- # force=-f 00:08:53.915 13:38:56 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:53.915 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:53.915 = sectsz=512 attr=2, projid32bit=1 00:08:53.915 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:53.915 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:53.915 data = bsize=4096 blocks=130560, imaxpct=25 00:08:53.915 = sunit=0 swidth=0 blks 00:08:53.915 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:53.915 log =internal log bsize=4096 blocks=16384, version=2 00:08:53.915 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:53.915 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:54.853 Discarding blocks...Done. 00:08:54.853 13:38:57 -- common/autotest_common.sh@921 -- # return 0 00:08:54.853 13:38:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:56.758 13:38:59 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:56.758 13:38:59 -- target/filesystem.sh@25 -- # sync 00:08:56.758 13:38:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:56.758 13:38:59 -- target/filesystem.sh@27 -- # sync 00:08:56.758 13:38:59 -- target/filesystem.sh@29 -- # i=0 00:08:56.758 13:38:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:56.758 13:38:59 -- target/filesystem.sh@37 -- # kill -0 1449564 00:08:56.758 13:38:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:56.758 13:38:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:56.758 13:38:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:56.758 13:38:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:56.758 00:08:56.758 real 0m2.953s 00:08:56.758 user 0m0.022s 00:08:56.758 sys 0m0.072s 00:08:56.758 13:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.758 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:56.758 ************************************ 00:08:56.758 END TEST filesystem_xfs 00:08:56.758 ************************************ 00:08:56.758 13:38:59 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:57.326 13:38:59 -- target/filesystem.sh@93 -- # sync 00:08:57.326 13:38:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.326 13:38:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.326 13:38:59 -- common/autotest_common.sh@1198 -- # local i=0 00:08:57.326 13:38:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:57.326 13:38:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.326 13:38:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:57.326 13:38:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.326 13:38:59 -- common/autotest_common.sh@1210 -- # return 0 00:08:57.326 13:38:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.326 13:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.326 13:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:57.326 13:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.326 13:38:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:57.326 13:38:59 -- target/filesystem.sh@101 -- # killprocess 1449564 00:08:57.326 13:38:59 -- common/autotest_common.sh@926 -- # '[' -z 1449564 ']' 00:08:57.326 13:38:59 -- common/autotest_common.sh@930 -- # kill -0 1449564 00:08:57.326 13:38:59 -- 
common/autotest_common.sh@931 -- # uname 00:08:57.326 13:38:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:57.326 13:38:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1449564 00:08:57.327 13:38:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:57.327 13:38:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:57.327 13:38:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1449564' 00:08:57.327 killing process with pid 1449564 00:08:57.327 13:38:59 -- common/autotest_common.sh@945 -- # kill 1449564 00:08:57.327 13:38:59 -- common/autotest_common.sh@950 -- # wait 1449564 00:08:57.586 13:39:00 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:57.586 00:08:57.586 real 0m10.836s 00:08:57.586 user 0m42.601s 00:08:57.586 sys 0m1.132s 00:08:57.586 13:39:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.586 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.586 ************************************ 00:08:57.586 END TEST nvmf_filesystem_no_in_capsule 00:08:57.586 ************************************ 00:08:57.845 13:39:00 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:57.845 13:39:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:57.845 13:39:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.845 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.845 ************************************ 00:08:57.845 START TEST nvmf_filesystem_in_capsule 00:08:57.845 ************************************ 00:08:57.845 13:39:00 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:57.845 13:39:00 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:57.845 13:39:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:57.845 13:39:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.845 13:39:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.845 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.845 13:39:00 -- nvmf/common.sh@469 -- # nvmfpid=1451669 00:08:57.845 13:39:00 -- nvmf/common.sh@470 -- # waitforlisten 1451669 00:08:57.845 13:39:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.846 13:39:00 -- common/autotest_common.sh@819 -- # '[' -z 1451669 ']' 00:08:57.846 13:39:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.846 13:39:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.846 13:39:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.846 13:39:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.846 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:57.846 [2024-07-11 13:39:00.131065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:08:57.846 [2024-07-11 13:39:00.131111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.846 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.846 [2024-07-11 13:39:00.189131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.846 [2024-07-11 13:39:00.229084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.846 [2024-07-11 13:39:00.229193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.846 [2024-07-11 13:39:00.229201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.846 [2024-07-11 13:39:00.229208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.846 [2024-07-11 13:39:00.229248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.846 [2024-07-11 13:39:00.229268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.846 [2024-07-11 13:39:00.229369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.846 [2024-07-11 13:39:00.229370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.784 13:39:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:58.784 13:39:00 -- common/autotest_common.sh@852 -- # return 0 00:08:58.784 13:39:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:58.784 13:39:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.784 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 13:39:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.784 13:39:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:58.784 13:39:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:58.784 13:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 [2024-07-11 13:39:00.976637] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.784 13:39:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:58.784 13:39:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 Malloc1 00:08:58.784 13:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:01 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:58.784 13:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 13:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:01 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:58.784 13:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 13:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:01 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:08:58.784 13:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 [2024-07-11 13:39:01.125804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.784 13:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:01 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:58.784 13:39:01 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:58.784 13:39:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:58.784 13:39:01 -- common/autotest_common.sh@1359 -- # local bs 00:08:58.784 13:39:01 -- common/autotest_common.sh@1360 -- # local nb 00:08:58.784 13:39:01 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:58.784 13:39:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.784 13:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:58.784 13:39:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.784 13:39:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:58.784 { 00:08:58.784 "name": "Malloc1", 00:08:58.784 "aliases": [ 00:08:58.784 "c1a6aad4-8a45-4276-8d27-1ef94932d95a" 00:08:58.784 ], 00:08:58.784 "product_name": "Malloc disk", 00:08:58.784 "block_size": 512, 00:08:58.784 "num_blocks": 1048576, 00:08:58.784 "uuid": "c1a6aad4-8a45-4276-8d27-1ef94932d95a", 00:08:58.784 "assigned_rate_limits": { 00:08:58.784 "rw_ios_per_sec": 0, 00:08:58.784 "rw_mbytes_per_sec": 0, 00:08:58.784 "r_mbytes_per_sec": 0, 00:08:58.784 "w_mbytes_per_sec": 0 00:08:58.784 }, 00:08:58.784 "claimed": true, 00:08:58.784 "claim_type": "exclusive_write", 00:08:58.784 "zoned": false, 00:08:58.784 "supported_io_types": { 00:08:58.784 "read": true, 00:08:58.784 "write": true, 00:08:58.784 "unmap": true, 00:08:58.784 "write_zeroes": true, 00:08:58.784 "flush": true, 00:08:58.784 "reset": true, 00:08:58.784 "compare": false, 00:08:58.784 "compare_and_write": false, 00:08:58.784 "abort": true, 00:08:58.784 "nvme_admin": false, 00:08:58.784 "nvme_io": false 00:08:58.784 }, 00:08:58.784 "memory_domains": [ 00:08:58.784 { 00:08:58.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.784 "dma_device_type": 2 00:08:58.784 } 00:08:58.784 ], 00:08:58.784 "driver_specific": {} 00:08:58.784 } 00:08:58.784 ]' 00:08:58.784 13:39:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:58.784 13:39:01 -- common/autotest_common.sh@1362 -- # bs=512 00:08:58.784 13:39:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:58.784 13:39:01 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:58.784 13:39:01 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:58.784 13:39:01 -- common/autotest_common.sh@1367 -- # echo 512 00:08:58.784 13:39:01 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:58.784 13:39:01 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.163 13:39:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.163 13:39:02 -- common/autotest_common.sh@1177 -- # local i=0 00:09:00.163 13:39:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.163 13:39:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:00.163 13:39:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:02.069 13:39:04 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:02.069 13:39:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:02.069 13:39:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.069 13:39:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:02.069 13:39:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.069 13:39:04 -- common/autotest_common.sh@1187 -- # return 0 00:09:02.069 13:39:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:02.069 13:39:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:02.069 13:39:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:02.069 13:39:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:02.069 13:39:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:02.069 13:39:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:02.069 13:39:04 -- setup/common.sh@80 -- # echo 536870912 00:09:02.069 13:39:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:02.069 13:39:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:02.069 13:39:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:02.069 13:39:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:02.328 13:39:04 -- target/filesystem.sh@69 -- # partprobe 00:09:02.896 13:39:05 -- target/filesystem.sh@70 -- # sleep 1 00:09:04.272 13:39:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:04.272 13:39:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:04.272 13:39:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:04.272 13:39:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.272 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:09:04.272 ************************************ 00:09:04.272 START TEST filesystem_in_capsule_ext4 00:09:04.272 ************************************ 00:09:04.272 13:39:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:04.272 13:39:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:04.272 13:39:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:04.272 13:39:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:04.272 13:39:06 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:09:04.272 13:39:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:04.272 13:39:06 -- common/autotest_common.sh@904 -- # local i=0 00:09:04.272 13:39:06 -- common/autotest_common.sh@905 -- # local force 00:09:04.272 13:39:06 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:09:04.272 13:39:06 -- common/autotest_common.sh@908 -- # force=-F 00:09:04.272 13:39:06 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:04.272 mke2fs 1.46.5 (30-Dec-2021) 00:09:04.272 Discarding device blocks: 0/522240 done 00:09:04.272 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:04.272 Filesystem UUID: 33563bcc-fa96-4f9e-82e3-89dee754aeb4 00:09:04.272 Superblock backups stored on blocks: 00:09:04.272 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:04.272 00:09:04.272 Allocating group tables: 0/64 done 00:09:04.272 Writing inode tables: 0/64 done 00:09:04.272 Creating journal (8192 blocks): done 00:09:04.272 Writing superblocks and filesystem accounting information: 0/64 done 00:09:04.272 00:09:04.272 
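[annotation] The harness now repeats, for the 4096-byte in-capsule case, the same mount/IO/teardown check it ran in the no-in-capsule pass above. A minimal sketch of that verification cycle, assuming the exported namespace appears as /dev/nvme0n1 and its partition as /dev/nvme0n1p1 (device names vary by host):

    # create a filesystem on the exported namespace's partition, then
    # prove basic file I/O works end-to-end over NVMe/TCP
    mkfs.ext4 -F /dev/nvme0n1p1
    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa        # write a file across the fabric
    sync                         # flush it to the target
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"           # target process must still be alive

These are the same steps visible in the target/filesystem.sh@23-37 trace lines.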
13:39:06 -- common/autotest_common.sh@921 -- # return 0 00:09:04.272 13:39:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:04.272 13:39:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:04.272 13:39:06 -- target/filesystem.sh@25 -- # sync 00:09:04.534 13:39:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:04.534 13:39:06 -- target/filesystem.sh@27 -- # sync 00:09:04.534 13:39:06 -- target/filesystem.sh@29 -- # i=0 00:09:04.534 13:39:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:04.534 13:39:06 -- target/filesystem.sh@37 -- # kill -0 1451669 00:09:04.534 13:39:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:04.534 13:39:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:04.534 13:39:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:04.534 13:39:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:04.534 00:09:04.534 real 0m0.432s 00:09:04.534 user 0m0.024s 00:09:04.534 sys 0m0.064s 00:09:04.534 13:39:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.534 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:09:04.534 ************************************ 00:09:04.534 END TEST filesystem_in_capsule_ext4 00:09:04.534 ************************************ 00:09:04.534 13:39:06 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:04.534 13:39:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:04.534 13:39:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.534 13:39:06 -- common/autotest_common.sh@10 -- # set +x 00:09:04.534 ************************************ 00:09:04.534 START TEST filesystem_in_capsule_btrfs 00:09:04.534 ************************************ 00:09:04.535 13:39:06 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:04.535 13:39:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:04.535 13:39:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:04.535 13:39:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:04.535 13:39:06 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:09:04.535 13:39:06 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:04.535 13:39:06 -- common/autotest_common.sh@904 -- # local i=0 00:09:04.535 13:39:06 -- common/autotest_common.sh@905 -- # local force 00:09:04.535 13:39:06 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:09:04.535 13:39:06 -- common/autotest_common.sh@910 -- # force=-f 00:09:04.535 13:39:06 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:04.794 btrfs-progs v6.6.2 00:09:04.794 See https://btrfs.readthedocs.io for more information. 00:09:04.794 00:09:04.794 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:04.794 NOTE: several default settings have changed in version 5.15, please make sure 00:09:04.794 this does not affect your deployments: 00:09:04.794 - DUP for metadata (-m dup) 00:09:04.794 - enabled no-holes (-O no-holes) 00:09:04.794 - enabled free-space-tree (-R free-space-tree) 00:09:04.794 00:09:04.794 Label: (null) 00:09:04.794 UUID: 840be9cb-066c-4280-96e4-dcda9eb2b792 00:09:04.794 Node size: 16384 00:09:04.794 Sector size: 4096 00:09:04.794 Filesystem size: 510.00MiB 00:09:04.794 Block group profiles: 00:09:04.794 Data: single 8.00MiB 00:09:04.794 Metadata: DUP 32.00MiB 00:09:04.794 System: DUP 8.00MiB 00:09:04.794 SSD detected: yes 00:09:04.794 Zoned device: no 00:09:04.794 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:04.794 Runtime features: free-space-tree 00:09:04.794 Checksum: crc32c 00:09:04.794 Number of devices: 1 00:09:04.794 Devices: 00:09:04.794 ID SIZE PATH 00:09:04.794 1 510.00MiB /dev/nvme0n1p1 00:09:04.794 00:09:04.794 13:39:07 -- common/autotest_common.sh@921 -- # return 0 00:09:04.794 13:39:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:05.360 13:39:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:05.360 13:39:07 -- target/filesystem.sh@25 -- # sync 00:09:05.360 13:39:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:05.360 13:39:07 -- target/filesystem.sh@27 -- # sync 00:09:05.360 13:39:07 -- target/filesystem.sh@29 -- # i=0 00:09:05.360 13:39:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:05.360 13:39:07 -- target/filesystem.sh@37 -- # kill -0 1451669 00:09:05.360 13:39:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:05.360 13:39:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:05.360 13:39:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:05.360 13:39:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:05.360 00:09:05.360 real 0m0.897s 00:09:05.360 user 0m0.029s 00:09:05.360 sys 0m0.132s 00:09:05.360 13:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.360 13:39:07 -- common/autotest_common.sh@10 -- # set +x 00:09:05.360 ************************************ 00:09:05.360 END TEST filesystem_in_capsule_btrfs 00:09:05.360 ************************************ 00:09:05.360 13:39:07 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:05.360 13:39:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:05.360 13:39:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.360 13:39:07 -- common/autotest_common.sh@10 -- # set +x 00:09:05.360 ************************************ 00:09:05.360 START TEST filesystem_in_capsule_xfs 00:09:05.360 ************************************ 00:09:05.360 13:39:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:09:05.360 13:39:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:05.360 13:39:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:05.360 13:39:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:05.360 13:39:07 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:09:05.360 13:39:07 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:09:05.360 13:39:07 -- common/autotest_common.sh@904 -- # local i=0 00:09:05.360 13:39:07 -- common/autotest_common.sh@905 -- # local force 00:09:05.360 13:39:07 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:09:05.360 13:39:07 -- common/autotest_common.sh@910 -- # force=-f 
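[annotation] The force-flag selection traced above (force=-F for ext4, force=-f for btrfs and xfs) comes from the harness's make_filesystem helper. A condensed sketch of that logic, reconstructed from the xtrace output (names as traced, retry handling abridged):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4's mkfs spells "force" as -F; btrfs and xfs use -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1   # the invocation that runs next in this log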
00:09:05.360 13:39:07 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:05.619 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:05.619 = sectsz=512 attr=2, projid32bit=1 00:09:05.619 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:05.619 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:05.619 data = bsize=4096 blocks=130560, imaxpct=25 00:09:05.619 = sunit=0 swidth=0 blks 00:09:05.619 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:05.619 log =internal log bsize=4096 blocks=16384, version=2 00:09:05.619 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:05.619 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:06.187 Discarding blocks...Done. 00:09:06.187 13:39:08 -- common/autotest_common.sh@921 -- # return 0 00:09:06.187 13:39:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:08.809 13:39:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:08.809 13:39:11 -- target/filesystem.sh@25 -- # sync 00:09:08.809 13:39:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:08.809 13:39:11 -- target/filesystem.sh@27 -- # sync 00:09:08.809 13:39:11 -- target/filesystem.sh@29 -- # i=0 00:09:08.809 13:39:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:08.809 13:39:11 -- target/filesystem.sh@37 -- # kill -0 1451669 00:09:08.809 13:39:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:08.809 13:39:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:08.809 13:39:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:08.809 13:39:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:08.809 00:09:08.809 real 0m3.352s 00:09:08.809 user 0m0.020s 00:09:08.809 sys 0m0.075s 00:09:08.809 13:39:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.809 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:09:08.809 ************************************ 00:09:08.809 END TEST filesystem_in_capsule_xfs 00:09:08.809 ************************************ 00:09:08.809 13:39:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:08.809 13:39:11 -- target/filesystem.sh@93 -- # sync 00:09:08.809 13:39:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.067 13:39:11 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.067 13:39:11 -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.067 13:39:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:09.067 13:39:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.067 13:39:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:09.067 13:39:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.067 13:39:11 -- common/autotest_common.sh@1210 -- # return 0 00:09:09.067 13:39:11 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.067 13:39:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.067 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:09:09.067 13:39:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.067 13:39:11 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:09.067 13:39:11 -- target/filesystem.sh@101 -- # killprocess 1451669 00:09:09.067 13:39:11 -- common/autotest_common.sh@926 -- # '[' -z 1451669 ']' 00:09:09.067 13:39:11 -- common/autotest_common.sh@930 -- # kill -0 1451669 
00:09:09.067 13:39:11 -- common/autotest_common.sh@931 -- # uname 00:09:09.067 13:39:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:09.067 13:39:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1451669 00:09:09.067 13:39:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:09.067 13:39:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:09.067 13:39:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1451669' 00:09:09.067 killing process with pid 1451669 00:09:09.067 13:39:11 -- common/autotest_common.sh@945 -- # kill 1451669 00:09:09.067 13:39:11 -- common/autotest_common.sh@950 -- # wait 1451669 00:09:09.326 13:39:11 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:09.326 00:09:09.326 real 0m11.664s 00:09:09.326 user 0m45.843s 00:09:09.326 sys 0m1.170s 00:09:09.326 13:39:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.326 13:39:11 -- common/autotest_common.sh@10 -- # set +x 00:09:09.326 ************************************ 00:09:09.326 END TEST nvmf_filesystem_in_capsule 00:09:09.326 ************************************ 00:09:09.326 13:39:11 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:09.326 13:39:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:09.326 13:39:11 -- nvmf/common.sh@116 -- # sync 00:09:09.326 13:39:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:09.326 13:39:11 -- nvmf/common.sh@119 -- # set +e 00:09:09.326 13:39:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:09.326 13:39:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:09.585 rmmod nvme_tcp 00:09:09.585 rmmod nvme_fabrics 00:09:09.585 rmmod nvme_keyring 00:09:09.585 13:39:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:09.585 13:39:11 -- nvmf/common.sh@123 -- # set -e 00:09:09.585 13:39:11 -- nvmf/common.sh@124 -- # return 0 00:09:09.585 13:39:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:09:09.585 13:39:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:09.585 13:39:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:09.585 13:39:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:09.585 13:39:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:09.585 13:39:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:09.585 13:39:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.585 13:39:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.585 13:39:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.490 13:39:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:11.490 00:09:11.490 real 0m30.300s 00:09:11.490 user 1m30.111s 00:09:11.490 sys 0m6.454s 00:09:11.490 13:39:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.490 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:09:11.490 ************************************ 00:09:11.490 END TEST nvmf_filesystem 00:09:11.490 ************************************ 00:09:11.490 13:39:13 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:11.490 13:39:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:11.490 13:39:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.490 13:39:13 -- common/autotest_common.sh@10 -- # set +x 00:09:11.490 ************************************ 00:09:11.490 START TEST nvmf_discovery 00:09:11.490 ************************************ 00:09:11.490 
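[annotation] The test starting here exercises the target's discovery service. It drives the target over RPC, but a host-side query against the listener configured in these tests would look roughly like this, assuming nvme-cli is installed on the initiator:

    # ask the target's discovery controller which subsystems it exposes
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562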
13:39:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:11.750 * Looking for test storage... 00:09:11.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.750 13:39:14 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.750 13:39:14 -- nvmf/common.sh@7 -- # uname -s 00:09:11.750 13:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.750 13:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.750 13:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.750 13:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.750 13:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.750 13:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.750 13:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.750 13:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.750 13:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.750 13:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.750 13:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.750 13:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.750 13:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.750 13:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.750 13:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.750 13:39:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.750 13:39:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.750 13:39:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.750 13:39:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.750 13:39:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.750 13:39:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.750 13:39:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.750 13:39:14 -- paths/export.sh@5 -- # export PATH 00:09:11.750 13:39:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.750 13:39:14 -- nvmf/common.sh@46 -- # : 0 00:09:11.750 13:39:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:11.750 13:39:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:11.750 13:39:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:11.750 13:39:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.750 13:39:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.750 13:39:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:11.750 13:39:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:11.750 13:39:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:11.750 13:39:14 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:11.750 13:39:14 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:11.750 13:39:14 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:11.750 13:39:14 -- target/discovery.sh@15 -- # hash nvme 00:09:11.750 13:39:14 -- target/discovery.sh@20 -- # nvmftestinit 00:09:11.750 13:39:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:11.750 13:39:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.750 13:39:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:11.750 13:39:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:11.750 13:39:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:11.750 13:39:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.750 13:39:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.750 13:39:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.750 13:39:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:11.750 13:39:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:11.750 13:39:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:11.750 13:39:14 -- common/autotest_common.sh@10 -- # set +x 00:09:17.028 13:39:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:17.028 13:39:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:17.028 13:39:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:17.028 13:39:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:17.028 13:39:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:17.028 13:39:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:17.028 13:39:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:17.028 13:39:19 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:17.028 13:39:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:17.028 13:39:19 -- nvmf/common.sh@295 -- # e810=() 00:09:17.028 13:39:19 -- nvmf/common.sh@295 -- # local -ga e810 00:09:17.028 13:39:19 -- nvmf/common.sh@296 -- # x722=() 00:09:17.028 13:39:19 -- nvmf/common.sh@296 -- # local -ga x722 00:09:17.028 13:39:19 -- nvmf/common.sh@297 -- # mlx=() 00:09:17.028 13:39:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:17.028 13:39:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.028 13:39:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:17.028 13:39:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:17.028 13:39:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:17.028 13:39:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:17.028 13:39:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:17.028 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:17.028 13:39:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:17.028 13:39:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:17.028 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:17.028 13:39:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:17.028 13:39:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:17.028 13:39:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:17.028 13:39:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.028 13:39:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:17.029 13:39:19 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.029 13:39:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:17.029 Found net devices under 0000:86:00.0: cvl_0_0 00:09:17.029 13:39:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.029 13:39:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:17.029 13:39:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.029 13:39:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:17.029 13:39:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.029 13:39:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:17.029 Found net devices under 0000:86:00.1: cvl_0_1 00:09:17.029 13:39:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.029 13:39:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:17.029 13:39:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:17.029 13:39:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:17.029 13:39:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:17.029 13:39:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:17.029 13:39:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.029 13:39:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.029 13:39:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.029 13:39:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:17.029 13:39:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.029 13:39:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.029 13:39:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:17.029 13:39:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.029 13:39:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.029 13:39:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:17.029 13:39:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:17.029 13:39:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.029 13:39:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.029 13:39:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.029 13:39:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.029 13:39:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:17.029 13:39:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.292 13:39:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.292 13:39:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.292 13:39:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:17.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:09:17.292 00:09:17.292 --- 10.0.0.2 ping statistics --- 00:09:17.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.292 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:17.292 13:39:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:09:17.292 00:09:17.292 --- 10.0.0.1 ping statistics --- 00:09:17.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.292 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:17.292 13:39:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.292 13:39:19 -- nvmf/common.sh@410 -- # return 0 00:09:17.292 13:39:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:17.292 13:39:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.292 13:39:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:17.292 13:39:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:17.292 13:39:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.292 13:39:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:17.292 13:39:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:17.292 13:39:19 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:17.292 13:39:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:17.292 13:39:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:17.292 13:39:19 -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 13:39:19 -- nvmf/common.sh@469 -- # nvmfpid=1457795 00:09:17.292 13:39:19 -- nvmf/common.sh@470 -- # waitforlisten 1457795 00:09:17.292 13:39:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.292 13:39:19 -- common/autotest_common.sh@819 -- # '[' -z 1457795 ']' 00:09:17.292 13:39:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.292 13:39:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:17.292 13:39:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.292 13:39:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:17.292 13:39:19 -- common/autotest_common.sh@10 -- # set +x 00:09:17.292 [2024-07-11 13:39:19.616849] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:17.292 [2024-07-11 13:39:19.616891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.292 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.292 [2024-07-11 13:39:19.678432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.292 [2024-07-11 13:39:19.718767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:17.292 [2024-07-11 13:39:19.718879] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.292 [2024-07-11 13:39:19.718888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.292 [2024-07-11 13:39:19.718895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
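A condensed sketch of the nvmf_tcp_init bring-up traced above: one NIC port (cvl_0_0 in this run) is moved into a private network namespace to host the NVMe/TCP target, while its sibling port (cvl_0_1) stays in the default namespace as the initiator. Interface names are specific to this machine and will differ on other NICs:

# target port lives in a namespace, initiator port stays in the root namespace
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

The two single-packet pings in each direction are the health gate before 'return 0'; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt) so that it binds 10.0.0.2.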
00:09:17.292 [2024-07-11 13:39:19.718934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.292 [2024-07-11 13:39:19.718951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.292 [2024-07-11 13:39:19.719054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.292 [2024-07-11 13:39:19.719052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.233 13:39:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:18.233 13:39:20 -- common/autotest_common.sh@852 -- # return 0 00:09:18.233 13:39:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:18.233 13:39:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:18.233 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 13:39:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.233 13:39:20 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.233 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.233 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 [2024-07-11 13:39:20.461581] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.233 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.233 13:39:20 -- target/discovery.sh@26 -- # seq 1 4 00:09:18.233 13:39:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:18.233 13:39:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:18.233 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.233 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 Null1 00:09:18.233 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 [2024-07-11 13:39:20.507110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:18.234 13:39:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 Null2 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:18.234 13:39:20 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:18.234 13:39:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 Null3 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:18.234 13:39:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 Null4 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:18.234 
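In the loop traced above, rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. Outside the harness, the four-subsystem setup is roughly the following sketch, where 102400 and 512 are the traced null-bdev total size (given to bdev_null_create in MiB) and block size:

for i in 1 2 3 4; do
  scripts/rpc.py bdev_null_create "Null$i" 102400 512
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"   # -a: allow any host NQN to connect
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done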
13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:18.234 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.234 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.234 13:39:20 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:18.493 00:09:18.493 Discovery Log Number of Records 6, Generation counter 6 00:09:18.493 =====Discovery Log Entry 0====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: current discovery subsystem 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4420 00:09:18.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: explicit discovery connections, duplicate discovery information 00:09:18.493 sectype: none 00:09:18.493 =====Discovery Log Entry 1====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: nvme subsystem 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4420 00:09:18.493 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: none 00:09:18.493 sectype: none 00:09:18.493 =====Discovery Log Entry 2====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: nvme subsystem 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4420 00:09:18.493 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: none 00:09:18.493 sectype: none 00:09:18.493 =====Discovery Log Entry 3====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: nvme subsystem 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4420 00:09:18.493 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: none 00:09:18.493 sectype: none 00:09:18.493 =====Discovery Log Entry 4====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: nvme subsystem 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4420 00:09:18.493 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: none 00:09:18.493 sectype: none 00:09:18.493 =====Discovery Log Entry 5====== 00:09:18.493 trtype: tcp 00:09:18.493 adrfam: ipv4 00:09:18.493 subtype: discovery subsystem referral 00:09:18.493 treq: not required 00:09:18.493 portid: 0 00:09:18.493 trsvcid: 4430 00:09:18.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:18.493 traddr: 10.0.0.2 00:09:18.493 eflags: none 00:09:18.493 sectype: none 00:09:18.493 13:39:20 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:18.493 Perform nvmf subsystem discovery via RPC 00:09:18.493 13:39:20 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:18.493 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.493 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 [2024-07-11 13:39:20.751806] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:18.493 [ 00:09:18.493 { 00:09:18.493 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:18.493 "subtype": "Discovery", 00:09:18.493 "listen_addresses": [ 00:09:18.493 { 00:09:18.493 "transport": "TCP", 00:09:18.493 "trtype": "TCP", 00:09:18.493 "adrfam": "IPv4", 00:09:18.493 "traddr": "10.0.0.2", 00:09:18.493 "trsvcid": "4420" 00:09:18.493 } 00:09:18.493 ], 00:09:18.493 "allow_any_host": true, 00:09:18.493 "hosts": [] 00:09:18.493 }, 00:09:18.493 { 00:09:18.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.493 "subtype": "NVMe", 00:09:18.493 "listen_addresses": [ 00:09:18.493 { 00:09:18.493 "transport": "TCP", 00:09:18.493 "trtype": "TCP", 00:09:18.493 "adrfam": "IPv4", 00:09:18.494 "traddr": "10.0.0.2", 00:09:18.494 "trsvcid": "4420" 00:09:18.494 } 00:09:18.494 ], 00:09:18.494 "allow_any_host": true, 00:09:18.494 "hosts": [], 00:09:18.494 "serial_number": "SPDK00000000000001", 00:09:18.494 "model_number": "SPDK bdev Controller", 00:09:18.494 "max_namespaces": 32, 00:09:18.494 "min_cntlid": 1, 00:09:18.494 "max_cntlid": 65519, 00:09:18.494 "namespaces": [ 00:09:18.494 { 00:09:18.494 "nsid": 1, 00:09:18.494 "bdev_name": "Null1", 00:09:18.494 "name": "Null1", 00:09:18.494 "nguid": "4F51B35C28BA428B9F279B1A1C91CD2F", 00:09:18.494 "uuid": "4f51b35c-28ba-428b-9f27-9b1a1c91cd2f" 00:09:18.494 } 00:09:18.494 ] 00:09:18.494 }, 00:09:18.494 { 00:09:18.494 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:18.494 "subtype": "NVMe", 00:09:18.494 "listen_addresses": [ 00:09:18.494 { 00:09:18.494 "transport": "TCP", 00:09:18.494 "trtype": "TCP", 00:09:18.494 "adrfam": "IPv4", 00:09:18.494 "traddr": "10.0.0.2", 00:09:18.494 "trsvcid": "4420" 00:09:18.494 } 00:09:18.494 ], 00:09:18.494 "allow_any_host": true, 00:09:18.494 "hosts": [], 00:09:18.494 "serial_number": "SPDK00000000000002", 00:09:18.494 "model_number": "SPDK bdev Controller", 00:09:18.494 "max_namespaces": 32, 00:09:18.494 "min_cntlid": 1, 00:09:18.494 "max_cntlid": 65519, 00:09:18.494 "namespaces": [ 00:09:18.494 { 00:09:18.494 "nsid": 1, 00:09:18.494 "bdev_name": "Null2", 00:09:18.494 "name": "Null2", 00:09:18.494 "nguid": "D79C80B26E2D4730988D902E99FF7731", 00:09:18.494 "uuid": "d79c80b2-6e2d-4730-988d-902e99ff7731" 00:09:18.494 } 00:09:18.494 ] 00:09:18.494 }, 00:09:18.494 { 00:09:18.494 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:18.494 "subtype": "NVMe", 00:09:18.494 "listen_addresses": [ 00:09:18.494 { 00:09:18.494 "transport": "TCP", 00:09:18.494 "trtype": "TCP", 00:09:18.494 "adrfam": "IPv4", 00:09:18.494 "traddr": "10.0.0.2", 00:09:18.494 "trsvcid": "4420" 00:09:18.494 } 00:09:18.494 ], 00:09:18.494 "allow_any_host": true, 00:09:18.494 "hosts": [], 00:09:18.494 "serial_number": "SPDK00000000000003", 00:09:18.494 "model_number": "SPDK bdev Controller", 00:09:18.494 "max_namespaces": 32, 00:09:18.494 "min_cntlid": 1, 00:09:18.494 "max_cntlid": 65519, 00:09:18.494 "namespaces": [ 00:09:18.494 { 00:09:18.494 "nsid": 1, 00:09:18.494 "bdev_name": "Null3", 00:09:18.494 "name": "Null3", 00:09:18.494 "nguid": "0E51DFD0BC9941ABA8C4A6921678BEAB", 00:09:18.494 "uuid": "0e51dfd0-bc99-41ab-a8c4-a6921678beab" 00:09:18.494 } 00:09:18.494 ] 
00:09:18.494 }, 00:09:18.494 { 00:09:18.494 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:18.494 "subtype": "NVMe", 00:09:18.494 "listen_addresses": [ 00:09:18.494 { 00:09:18.494 "transport": "TCP", 00:09:18.494 "trtype": "TCP", 00:09:18.494 "adrfam": "IPv4", 00:09:18.494 "traddr": "10.0.0.2", 00:09:18.494 "trsvcid": "4420" 00:09:18.494 } 00:09:18.494 ], 00:09:18.494 "allow_any_host": true, 00:09:18.494 "hosts": [], 00:09:18.494 "serial_number": "SPDK00000000000004", 00:09:18.494 "model_number": "SPDK bdev Controller", 00:09:18.494 "max_namespaces": 32, 00:09:18.494 "min_cntlid": 1, 00:09:18.494 "max_cntlid": 65519, 00:09:18.494 "namespaces": [ 00:09:18.494 { 00:09:18.494 "nsid": 1, 00:09:18.494 "bdev_name": "Null4", 00:09:18.494 "name": "Null4", 00:09:18.494 "nguid": "1FAB9CF139F3450CB1E831E443EED7D1", 00:09:18.494 "uuid": "1fab9cf1-39f3-450c-b1e8-31e443eed7d1" 00:09:18.494 } 00:09:18.494 ] 00:09:18.494 } 00:09:18.494 ] 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@42 -- # seq 1 4 00:09:18.494 13:39:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:18.494 13:39:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:18.494 13:39:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:18.494 13:39:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:18.494 13:39:20 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
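The JSON block above is the raw nvmf_get_subsystems dump that the RPC discovery step asserts against. The same rpc.py-plus-jq pattern the test applies below (bdev_get_bdevs with jq -r '.[].name' to confirm the teardown left no bdevs behind) also works for quick spot checks against a live target; two illustrative one-liners, assuming the default rpc.py socket:

scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'    # one subsystem NQN per line
scripts/rpc.py bdev_get_bdevs      | jq -r '.[].name'   # empty once Null1..Null4 are deleted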
00:09:18.494 13:39:20 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:18.494 13:39:20 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:18.494 13:39:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.494 13:39:20 -- common/autotest_common.sh@10 -- # set +x 00:09:18.494 13:39:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.494 13:39:20 -- target/discovery.sh@49 -- # check_bdevs= 00:09:18.494 13:39:20 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:18.494 13:39:20 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:18.494 13:39:20 -- target/discovery.sh@57 -- # nvmftestfini 00:09:18.494 13:39:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:18.494 13:39:20 -- nvmf/common.sh@116 -- # sync 00:09:18.494 13:39:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:18.494 13:39:20 -- nvmf/common.sh@119 -- # set +e 00:09:18.494 13:39:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:18.494 13:39:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:18.494 rmmod nvme_tcp 00:09:18.494 rmmod nvme_fabrics 00:09:18.494 rmmod nvme_keyring 00:09:18.494 13:39:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:18.494 13:39:20 -- nvmf/common.sh@123 -- # set -e 00:09:18.494 13:39:20 -- nvmf/common.sh@124 -- # return 0 00:09:18.494 13:39:20 -- nvmf/common.sh@477 -- # '[' -n 1457795 ']' 00:09:18.494 13:39:20 -- nvmf/common.sh@478 -- # killprocess 1457795 00:09:18.494 13:39:20 -- common/autotest_common.sh@926 -- # '[' -z 1457795 ']' 00:09:18.494 13:39:20 -- common/autotest_common.sh@930 -- # kill -0 1457795 00:09:18.494 13:39:20 -- common/autotest_common.sh@931 -- # uname 00:09:18.754 13:39:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:18.754 13:39:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1457795 00:09:18.754 13:39:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:18.754 13:39:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:18.754 13:39:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1457795' 00:09:18.754 killing process with pid 1457795 00:09:18.754 13:39:20 -- common/autotest_common.sh@945 -- # kill 1457795 00:09:18.754 [2024-07-11 13:39:20.993729] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:18.754 13:39:20 -- common/autotest_common.sh@950 -- # wait 1457795 00:09:18.754 13:39:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:18.754 13:39:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:18.754 13:39:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:18.754 13:39:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.754 13:39:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:18.754 13:39:21 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.754 13:39:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.754 13:39:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.294 13:39:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:21.294 00:09:21.294 real 0m9.293s 00:09:21.294 user 0m7.389s 00:09:21.294 sys 0m4.549s 00:09:21.294 13:39:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.294 13:39:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.294 ************************************ 00:09:21.294 END TEST nvmf_discovery 00:09:21.294 ************************************ 00:09:21.294 13:39:23 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:21.294 13:39:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:21.294 13:39:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.294 13:39:23 -- common/autotest_common.sh@10 -- # set +x 00:09:21.294 ************************************ 00:09:21.294 START TEST nvmf_referrals 00:09:21.294 ************************************ 00:09:21.294 13:39:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:21.294 * Looking for test storage... 00:09:21.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.294 13:39:23 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.294 13:39:23 -- nvmf/common.sh@7 -- # uname -s 00:09:21.294 13:39:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.294 13:39:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.294 13:39:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.294 13:39:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.294 13:39:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.294 13:39:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.294 13:39:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.294 13:39:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.294 13:39:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.294 13:39:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.294 13:39:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.294 13:39:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:21.294 13:39:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.294 13:39:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.294 13:39:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.294 13:39:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.294 13:39:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.294 13:39:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.294 13:39:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.294 13:39:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.294 13:39:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.294 13:39:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.294 13:39:23 -- paths/export.sh@5 -- # export PATH 00:09:21.294 13:39:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.294 13:39:23 -- nvmf/common.sh@46 -- # : 0 00:09:21.294 13:39:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:21.294 13:39:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:21.294 13:39:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:21.294 13:39:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.294 13:39:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.294 13:39:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:21.294 13:39:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:21.294 13:39:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:21.294 13:39:23 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:21.294 13:39:23 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:21.294 13:39:23 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:21.294 13:39:23 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:21.294 13:39:23 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:21.295 13:39:23 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:21.295 13:39:23 -- target/referrals.sh@37 -- # nvmftestinit 00:09:21.295 13:39:23 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:09:21.295 13:39:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.295 13:39:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:21.295 13:39:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:21.295 13:39:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:21.295 13:39:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.295 13:39:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.295 13:39:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.295 13:39:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:21.295 13:39:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:21.295 13:39:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:21.295 13:39:23 -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 13:39:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:26.567 13:39:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:26.567 13:39:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:26.567 13:39:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:26.567 13:39:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:26.567 13:39:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:26.567 13:39:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:26.567 13:39:27 -- nvmf/common.sh@294 -- # net_devs=() 00:09:26.567 13:39:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:26.567 13:39:27 -- nvmf/common.sh@295 -- # e810=() 00:09:26.567 13:39:27 -- nvmf/common.sh@295 -- # local -ga e810 00:09:26.567 13:39:27 -- nvmf/common.sh@296 -- # x722=() 00:09:26.567 13:39:27 -- nvmf/common.sh@296 -- # local -ga x722 00:09:26.567 13:39:27 -- nvmf/common.sh@297 -- # mlx=() 00:09:26.567 13:39:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:26.567 13:39:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.567 13:39:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:26.567 13:39:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:26.567 13:39:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:26.567 13:39:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:26.567 13:39:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.567 13:39:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:26.567 13:39:27 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:26.567 13:39:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.567 13:39:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.567 13:39:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:26.567 13:39:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:26.567 13:39:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.567 13:39:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:26.567 13:39:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.567 13:39:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.567 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.567 13:39:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.567 13:39:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:26.567 13:39:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.567 13:39:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:26.567 13:39:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.567 13:39:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.567 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.567 13:39:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.567 13:39:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:26.567 13:39:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:26.567 13:39:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:26.567 13:39:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.567 13:39:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.567 13:39:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.567 13:39:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:26.567 13:39:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.567 13:39:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.567 13:39:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:26.567 13:39:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.567 13:39:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.567 13:39:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:26.567 13:39:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:26.567 13:39:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.567 13:39:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:09:26.567 13:39:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.567 13:39:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.567 13:39:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:26.567 13:39:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.567 13:39:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.567 13:39:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.567 13:39:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:26.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:09:26.567 00:09:26.567 --- 10.0.0.2 ping statistics --- 00:09:26.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.567 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:26.567 13:39:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:09:26.567 00:09:26.567 --- 10.0.0.1 ping statistics --- 00:09:26.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.567 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:26.567 13:39:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.567 13:39:28 -- nvmf/common.sh@410 -- # return 0 00:09:26.567 13:39:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:26.567 13:39:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.567 13:39:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:26.567 13:39:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.567 13:39:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:26.567 13:39:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:26.567 13:39:28 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:26.567 13:39:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:26.567 13:39:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:26.567 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:09:26.567 13:39:28 -- nvmf/common.sh@469 -- # nvmfpid=1461590 00:09:26.567 13:39:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.567 13:39:28 -- nvmf/common.sh@470 -- # waitforlisten 1461590 00:09:26.567 13:39:28 -- common/autotest_common.sh@819 -- # '[' -z 1461590 ']' 00:09:26.567 13:39:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.567 13:39:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:26.567 13:39:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.568 13:39:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:26.568 13:39:28 -- common/autotest_common.sh@10 -- # set +x 00:09:26.568 [2024-07-11 13:39:28.335859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:26.568 [2024-07-11 13:39:28.335900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.568 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.568 [2024-07-11 13:39:28.394835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.568 [2024-07-11 13:39:28.432870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.568 [2024-07-11 13:39:28.432986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.568 [2024-07-11 13:39:28.432994] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.568 [2024-07-11 13:39:28.433006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.568 [2024-07-11 13:39:28.433097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.568 [2024-07-11 13:39:28.433197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.568 [2024-07-11 13:39:28.433271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.568 [2024-07-11 13:39:28.433273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.827 13:39:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:26.827 13:39:29 -- common/autotest_common.sh@852 -- # return 0 00:09:26.827 13:39:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:26.827 13:39:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.827 13:39:29 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 [2024-07-11 13:39:29.174575] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 [2024-07-11 13:39:29.187980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:26.827 13:39:29 -- target/referrals.sh@48 -- # jq length 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:26.827 13:39:29 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:26.827 13:39:29 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:26.827 13:39:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:26.827 13:39:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:26.827 13:39:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:26.827 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:26.827 13:39:29 -- target/referrals.sh@21 -- # sort 00:09:26.827 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:26.827 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:27.087 13:39:29 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:27.087 13:39:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:27.087 13:39:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # sort 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:27.087 13:39:29 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:27.087 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.087 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.087 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:27.087 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.087 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.087 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:27.087 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.087 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.087 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:27.087 13:39:29 -- target/referrals.sh@56 -- # jq length 00:09:27.087 13:39:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.087 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.087 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.087 13:39:29 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:27.087 13:39:29 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:27.087 13:39:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:27.087 13:39:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:27.087 13:39:29 -- target/referrals.sh@26 -- # sort 00:09:27.346 13:39:29 -- target/referrals.sh@26 -- # echo 00:09:27.346 13:39:29 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:27.346 13:39:29 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:27.346 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.346 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.346 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.346 13:39:29 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:27.346 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.346 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.346 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.346 13:39:29 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:27.346 13:39:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:27.346 13:39:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:27.346 13:39:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:27.346 13:39:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.346 13:39:29 -- target/referrals.sh@21 -- # sort 00:09:27.346 13:39:29 -- common/autotest_common.sh@10 -- # set +x 00:09:27.346 13:39:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.346 13:39:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:27.346 13:39:29 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:27.346 13:39:29 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:27.346 13:39:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:27.346 13:39:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:27.346 13:39:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.346 13:39:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:27.346 13:39:29 -- target/referrals.sh@26 -- # sort 00:09:27.346 13:39:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:27.346 13:39:29 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:27.346 13:39:29 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:27.346 13:39:29 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:27.346 13:39:29 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:27.346 13:39:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.346 13:39:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:27.605 13:39:29 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:27.605 13:39:29 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:27.605 13:39:29 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:27.605 13:39:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:27.605 13:39:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.605 13:39:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:27.864 13:39:30 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:27.864 13:39:30 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:27.864 13:39:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.864 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:09:27.864 13:39:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.864 13:39:30 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:27.864 13:39:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:27.864 13:39:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:27.864 13:39:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:27.864 13:39:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:27.864 13:39:30 -- target/referrals.sh@21 -- # sort 00:09:27.864 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:09:27.864 13:39:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:27.864 13:39:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:27.864 13:39:30 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:27.864 13:39:30 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:27.864 13:39:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:27.864 13:39:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:27.864 13:39:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.864 13:39:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:27.864 13:39:30 -- target/referrals.sh@26 -- # sort 00:09:27.864 13:39:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:27.864 13:39:30 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:27.864 13:39:30 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:27.864 13:39:30 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:27.864 13:39:30 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:27.864 13:39:30 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:27.864 13:39:30 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:28.123 13:39:30 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:28.123 13:39:30 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:28.123 13:39:30 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:28.123 13:39:30 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:28.123 13:39:30 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:28.123 13:39:30 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:28.382 13:39:30 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:28.382 13:39:30 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:28.382 13:39:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.382 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:09:28.382 13:39:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.382 13:39:30 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.382 13:39:30 -- target/referrals.sh@82 -- # jq length 00:09:28.382 13:39:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.382 13:39:30 -- common/autotest_common.sh@10 -- # set +x 00:09:28.382 13:39:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.383 13:39:30 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:28.383 13:39:30 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:28.383 13:39:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.383 13:39:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.383 13:39:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:28.383 13:39:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.383 13:39:30 -- target/referrals.sh@26 -- # sort 00:09:28.383 13:39:30 -- target/referrals.sh@26 -- # echo 00:09:28.383 13:39:30 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:28.383 13:39:30 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:28.383 13:39:30 -- target/referrals.sh@86 -- # nvmftestfini 00:09:28.383 13:39:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:28.383 13:39:30 -- nvmf/common.sh@116 -- # sync 00:09:28.383 13:39:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:28.383 13:39:30 -- nvmf/common.sh@119 -- # set +e 00:09:28.383 13:39:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:28.383 13:39:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:28.383 rmmod nvme_tcp 00:09:28.383 rmmod nvme_fabrics 00:09:28.383 rmmod nvme_keyring 00:09:28.642 13:39:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:28.642 13:39:30 -- nvmf/common.sh@123 -- # set -e 00:09:28.642 13:39:30 -- nvmf/common.sh@124 -- # return 0 00:09:28.642 13:39:30 -- nvmf/common.sh@477 
-- # '[' -n 1461590 ']' 00:09:28.642 13:39:30 -- nvmf/common.sh@478 -- # killprocess 1461590 00:09:28.642 13:39:30 -- common/autotest_common.sh@926 -- # '[' -z 1461590 ']' 00:09:28.642 13:39:30 -- common/autotest_common.sh@930 -- # kill -0 1461590 00:09:28.642 13:39:30 -- common/autotest_common.sh@931 -- # uname 00:09:28.642 13:39:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:28.642 13:39:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1461590 00:09:28.642 13:39:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:28.642 13:39:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:28.642 13:39:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1461590' 00:09:28.642 killing process with pid 1461590 00:09:28.642 13:39:30 -- common/autotest_common.sh@945 -- # kill 1461590 00:09:28.642 13:39:30 -- common/autotest_common.sh@950 -- # wait 1461590 00:09:28.642 13:39:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:28.642 13:39:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:28.643 13:39:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:28.643 13:39:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.643 13:39:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:28.643 13:39:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.643 13:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.643 13:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.220 13:39:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:31.220 00:09:31.220 real 0m9.862s 00:09:31.220 user 0m12.514s 00:09:31.220 sys 0m4.301s 00:09:31.220 13:39:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.220 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.220 ************************************ 00:09:31.220 END TEST nvmf_referrals 00:09:31.220 ************************************ 00:09:31.220 13:39:33 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:31.220 13:39:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:31.220 13:39:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.220 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:09:31.220 ************************************ 00:09:31.220 START TEST nvmf_connect_disconnect 00:09:31.220 ************************************ 00:09:31.220 13:39:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:31.220 * Looking for test storage... 
00:09:31.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.220 13:39:33 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.220 13:39:33 -- nvmf/common.sh@7 -- # uname -s 00:09:31.220 13:39:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.220 13:39:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.220 13:39:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.220 13:39:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.220 13:39:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.220 13:39:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.220 13:39:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.220 13:39:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.220 13:39:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.220 13:39:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.220 13:39:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:31.220 13:39:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:31.220 13:39:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.220 13:39:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.220 13:39:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:31.220 13:39:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.220 13:39:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.220 13:39:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.220 13:39:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.220 13:39:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.220 13:39:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.220 13:39:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.220 13:39:33 -- paths/export.sh@5 -- # export PATH 00:09:31.220 13:39:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.220 13:39:33 -- nvmf/common.sh@46 -- # : 0 00:09:31.220 13:39:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:31.220 13:39:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:31.220 13:39:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:31.220 13:39:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.220 13:39:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.220 13:39:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:31.220 13:39:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:31.220 13:39:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:31.220 13:39:33 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:31.220 13:39:33 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:31.220 13:39:33 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:31.220 13:39:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:31.220 13:39:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.221 13:39:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:31.221 13:39:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:31.221 13:39:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:31.221 13:39:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.221 13:39:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.221 13:39:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.221 13:39:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:31.221 13:39:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:31.221 13:39:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:31.221 13:39:33 -- common/autotest_common.sh@10 -- # set +x 00:09:36.497 13:39:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:36.497 13:39:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:36.497 13:39:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:36.497 13:39:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:36.497 13:39:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:36.497 13:39:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:36.497 13:39:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:36.497 13:39:38 -- nvmf/common.sh@294 -- # net_devs=() 00:09:36.497 13:39:38 -- nvmf/common.sh@294 -- # local -ga net_devs 
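[Note] The PCI scan traced next buckets the host's NICs by vendor/device ID before nvmftestinit picks a TCP test port. A minimal sketch of that classification, assuming plain lspci parsing; the harness itself indexes a pre-built pci_bus_cache map, and the array names below just mirror its e810/x722/mlx buckets:
    # Sketch: classify NVMe-oF-capable NICs by PCI vendor:device ID.
    # Assumes lspci from pciutils; IDs taken from the trace below.
    # Simplified: the real script matches specific ConnectX IDs, not 15b3:*.
    intel=8086 mellanox=15b3
    e810=(); x722=(); mlx=()
    while read -r slot _class ven dev _; do
      case "$ven:$dev" in
        "$intel:1592"|"$intel:159b") e810+=("$slot") ;;  # Intel E810
        "$intel:37d2")               x722+=("$slot") ;;  # Intel X722
        "$mellanox:"*)               mlx+=("$slot")  ;;  # Mellanox ConnectX family
      esac
    done < <(lspci -Dnmm | tr -d '"')
    echo "e810=${e810[*]} x722=${x722[*]} mlx=${mlx[*]}"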
00:09:36.497 13:39:38 -- nvmf/common.sh@295 -- # e810=() 00:09:36.497 13:39:38 -- nvmf/common.sh@295 -- # local -ga e810 00:09:36.497 13:39:38 -- nvmf/common.sh@296 -- # x722=() 00:09:36.497 13:39:38 -- nvmf/common.sh@296 -- # local -ga x722 00:09:36.497 13:39:38 -- nvmf/common.sh@297 -- # mlx=() 00:09:36.497 13:39:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:36.497 13:39:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.497 13:39:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:36.497 13:39:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:36.497 13:39:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:36.497 13:39:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:36.497 13:39:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:36.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:36.497 13:39:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:36.497 13:39:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:36.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:36.497 13:39:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:36.497 13:39:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:36.497 13:39:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:36.497 13:39:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.497 13:39:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.497 13:39:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.497 13:39:38 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:09:36.497 Found net devices under 0000:86:00.0: cvl_0_0 00:09:36.497 13:39:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.497 13:39:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:36.497 13:39:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.497 13:39:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.497 13:39:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.498 13:39:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:36.498 Found net devices under 0000:86:00.1: cvl_0_1 00:09:36.498 13:39:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.498 13:39:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:36.498 13:39:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:36.498 13:39:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:36.498 13:39:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:36.498 13:39:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:36.498 13:39:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.498 13:39:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.498 13:39:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.498 13:39:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:36.498 13:39:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.498 13:39:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.498 13:39:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:36.498 13:39:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.498 13:39:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.498 13:39:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:36.498 13:39:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:36.498 13:39:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.498 13:39:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.498 13:39:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.498 13:39:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.498 13:39:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:36.498 13:39:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.498 13:39:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.498 13:39:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.498 13:39:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:36.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:36.498 00:09:36.498 --- 10.0.0.2 ping statistics --- 00:09:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.498 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:36.498 13:39:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:36.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:09:36.498 00:09:36.498 --- 10.0.0.1 ping statistics --- 00:09:36.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.498 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:09:36.498 13:39:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.498 13:39:38 -- nvmf/common.sh@410 -- # return 0 00:09:36.498 13:39:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.498 13:39:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.498 13:39:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.498 13:39:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.498 13:39:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.498 13:39:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.498 13:39:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.498 13:39:38 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:36.498 13:39:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.498 13:39:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:36.498 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:36.498 13:39:38 -- nvmf/common.sh@469 -- # nvmfpid=1465479 00:09:36.498 13:39:38 -- nvmf/common.sh@470 -- # waitforlisten 1465479 00:09:36.498 13:39:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.498 13:39:38 -- common/autotest_common.sh@819 -- # '[' -z 1465479 ']' 00:09:36.498 13:39:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.498 13:39:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:36.498 13:39:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.498 13:39:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:36.498 13:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:36.498 [2024-07-11 13:39:38.825926] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:36.498 [2024-07-11 13:39:38.825968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.498 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.498 [2024-07-11 13:39:38.885459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.498 [2024-07-11 13:39:38.923398] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.498 [2024-07-11 13:39:38.923528] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.498 [2024-07-11 13:39:38.923538] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.498 [2024-07-11 13:39:38.923544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
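[Note] The namespace plumbing traced above, assembled into one readable sequence. This is a sketch for reference: cvl_0_0/cvl_0_1 are this rig's port names, so substitute your own; the commands themselves are the ones in the trace.
    # Move the target-side port into its own netns so initiator and
    # target talk over the physical link rather than loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side, then verify both directions:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator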
00:09:36.498 [2024-07-11 13:39:38.923591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.498 [2024-07-11 13:39:38.923686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.498 [2024-07-11 13:39:38.923754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.498 [2024-07-11 13:39:38.923755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.432 13:39:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:37.432 13:39:39 -- common/autotest_common.sh@852 -- # return 0 00:09:37.432 13:39:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.432 13:39:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 13:39:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:37.432 13:39:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 [2024-07-11 13:39:39.670577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.432 13:39:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.432 13:39:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 13:39:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.432 13:39:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 13:39:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.432 13:39:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 13:39:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.432 13:39:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.432 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:09:37.432 [2024-07-11 13:39:39.722486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.432 13:39:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:37.432 13:39:39 -- target/connect_disconnect.sh@34 -- # set +x 00:09:40.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:48.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... remaining ~94 iterations elided, timestamps 00:09:53.407 through 00:13:27.063, each logging: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) ...] 00:13:27.063 13:43:29 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
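[Note] What each of the 100 iterations above amounts to, sketched with scripts/rpc.py and nvme-cli. The RPC sequence, NQN, and addresses are the ones in the trace; the loop body is a paraphrase of connect_disconnect.sh rather than its literal code, and the rpc.py path is shortened.
    # One-time target setup (traced at connect_disconnect.sh@18-24):
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0 (64 MiB, 512 B blocks)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # The loop behind the "disconnected 1 controller(s)" lines:
    for _ in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # prints the NQN line above
    done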
00:13:27.063 13:43:29 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:27.063 13:43:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.063 13:43:29 -- nvmf/common.sh@116 -- # sync 00:13:27.063 13:43:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.063 13:43:29 -- nvmf/common.sh@119 -- # set +e 00:13:27.063 13:43:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.063 13:43:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.063 rmmod nvme_tcp 00:13:27.063 rmmod nvme_fabrics 00:13:27.063 rmmod nvme_keyring 00:13:27.063 13:43:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.063 13:43:29 -- nvmf/common.sh@123 -- # set -e 00:13:27.063 13:43:29 -- nvmf/common.sh@124 -- # return 0 00:13:27.063 13:43:29 -- nvmf/common.sh@477 -- # '[' -n 1465479 ']' 00:13:27.063 13:43:29 -- nvmf/common.sh@478 -- # killprocess 1465479 00:13:27.063 13:43:29 -- common/autotest_common.sh@926 -- # '[' -z 1465479 ']' 00:13:27.063 13:43:29 -- common/autotest_common.sh@930 -- # kill -0 1465479 00:13:27.063 13:43:29 -- common/autotest_common.sh@931 -- # uname 00:13:27.063 13:43:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:27.063 13:43:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1465479 00:13:27.063 13:43:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:27.063 13:43:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:27.063 13:43:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1465479' 00:13:27.063 killing process with pid 1465479 00:13:27.063 13:43:29 -- common/autotest_common.sh@945 -- # kill 1465479 00:13:27.063 13:43:29 -- common/autotest_common.sh@950 -- # wait 1465479 00:13:27.323 13:43:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.323 13:43:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.323 13:43:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.323 13:43:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.323 13:43:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.323 13:43:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.323 13:43:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.323 13:43:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.860 13:43:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:29.860 00:13:29.860 real 3m58.567s 00:13:29.860 user 15m15.518s 00:13:29.860 sys 0m20.310s 00:13:29.860 13:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.860 13:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:29.860 ************************************ 00:13:29.860 END TEST nvmf_connect_disconnect 00:13:29.860 ************************************ 00:13:29.860 13:43:31 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:29.860 13:43:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:29.860 13:43:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.860 13:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:29.860 ************************************ 00:13:29.860 START TEST nvmf_multitarget 00:13:29.860 ************************************ 00:13:29.860 13:43:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:29.860 * Looking for test storage... 
00:13:29.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.860 13:43:31 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.860 13:43:31 -- nvmf/common.sh@7 -- # uname -s 00:13:29.860 13:43:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.860 13:43:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.860 13:43:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.860 13:43:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.860 13:43:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.860 13:43:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.860 13:43:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.860 13:43:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.860 13:43:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.860 13:43:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.860 13:43:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:29.860 13:43:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:29.860 13:43:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.860 13:43:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.860 13:43:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.860 13:43:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.860 13:43:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.860 13:43:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.860 13:43:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.860 13:43:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.860 13:43:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.860 13:43:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.860 13:43:31 -- paths/export.sh@5 -- # export PATH 00:13:29.860 13:43:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.860 13:43:31 -- nvmf/common.sh@46 -- # : 0 00:13:29.860 13:43:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.860 13:43:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.860 13:43:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.860 13:43:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.860 13:43:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.860 13:43:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.860 13:43:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.860 13:43:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.860 13:43:31 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:29.860 13:43:31 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:29.860 13:43:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.860 13:43:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.860 13:43:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.860 13:43:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.860 13:43:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.860 13:43:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.860 13:43:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.860 13:43:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.860 13:43:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:29.860 13:43:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:29.860 13:43:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:29.860 13:43:31 -- common/autotest_common.sh@10 -- # set +x 00:13:35.140 13:43:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:35.140 13:43:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:35.140 13:43:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:35.140 13:43:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:35.140 13:43:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:35.140 13:43:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:35.140 13:43:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:35.140 13:43:36 -- nvmf/common.sh@294 -- # net_devs=() 00:13:35.140 13:43:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:35.140 13:43:36 -- 
nvmf/common.sh@295 -- # e810=() 00:13:35.140 13:43:36 -- nvmf/common.sh@295 -- # local -ga e810 00:13:35.140 13:43:36 -- nvmf/common.sh@296 -- # x722=() 00:13:35.140 13:43:36 -- nvmf/common.sh@296 -- # local -ga x722 00:13:35.140 13:43:36 -- nvmf/common.sh@297 -- # mlx=() 00:13:35.140 13:43:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:35.140 13:43:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.140 13:43:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:35.140 13:43:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:35.140 13:43:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:35.140 13:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:35.140 13:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:35.140 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:35.140 13:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:35.140 13:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:35.140 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:35.140 13:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:35.140 13:43:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:35.140 13:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.140 13:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:35.140 13:43:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.140 13:43:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:13:35.140 Found net devices under 0000:86:00.0: cvl_0_0 00:13:35.140 13:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.140 13:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:35.140 13:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.140 13:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:35.140 13:43:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.140 13:43:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:35.140 Found net devices under 0000:86:00.1: cvl_0_1 00:13:35.140 13:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.140 13:43:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:35.140 13:43:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:35.140 13:43:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:35.140 13:43:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:35.140 13:43:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.140 13:43:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.141 13:43:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.141 13:43:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:35.141 13:43:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.141 13:43:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.141 13:43:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:35.141 13:43:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.141 13:43:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.141 13:43:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:35.141 13:43:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:35.141 13:43:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.141 13:43:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.141 13:43:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.141 13:43:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.141 13:43:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:35.141 13:43:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.141 13:43:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.141 13:43:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.141 13:43:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:35.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:13:35.141 00:13:35.141 --- 10.0.0.2 ping statistics --- 00:13:35.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.141 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:35.141 13:43:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:13:35.141 00:13:35.141 --- 10.0.0.1 ping statistics --- 00:13:35.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.141 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:35.141 13:43:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.141 13:43:37 -- nvmf/common.sh@410 -- # return 0 00:13:35.141 13:43:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:35.141 13:43:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.141 13:43:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:35.141 13:43:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:35.141 13:43:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.141 13:43:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:35.141 13:43:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.141 13:43:37 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:35.141 13:43:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.141 13:43:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:35.141 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:35.141 13:43:37 -- nvmf/common.sh@469 -- # nvmfpid=1509851 00:13:35.141 13:43:37 -- nvmf/common.sh@470 -- # waitforlisten 1509851 00:13:35.141 13:43:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.141 13:43:37 -- common/autotest_common.sh@819 -- # '[' -z 1509851 ']' 00:13:35.141 13:43:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.141 13:43:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.141 13:43:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.141 13:43:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.141 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:35.141 [2024-07-11 13:43:37.129325] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:35.141 [2024-07-11 13:43:37.129380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.141 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.141 [2024-07-11 13:43:37.189292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.141 [2024-07-11 13:43:37.227872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.141 [2024-07-11 13:43:37.227992] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.141 [2024-07-11 13:43:37.228000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.141 [2024-07-11 13:43:37.228006] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
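[Note] A reading aid for the nvmf_tgt invocation traced above, based on SPDK's standard app options: -i picks the shared-memory instance id, -e enables tracepoint groups (hence the /dev/shm/nvmf_trace.0 notice), and -m is the reactor core mask.
    ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # -i 0       shm id (NVMF_APP_SHM_ID in the trace)
    # -e 0xFFFF  tracepoint group mask -> snapshot via 'spdk_trace -s nvmf -i 0'
    # -m 0xF     core mask 0b1111 -> the four reactors started next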
00:13:35.141 [2024-07-11 13:43:37.228051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.141 [2024-07-11 13:43:37.228150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.141 [2024-07-11 13:43:37.228225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.141 [2024-07-11 13:43:37.228227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.710 13:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.710 13:43:37 -- common/autotest_common.sh@852 -- # return 0 00:13:35.710 13:43:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:35.710 13:43:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.710 13:43:37 -- common/autotest_common.sh@10 -- # set +x 00:13:35.710 13:43:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.710 13:43:37 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:35.710 13:43:37 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:35.710 13:43:37 -- target/multitarget.sh@21 -- # jq length 00:13:35.710 13:43:38 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:35.710 13:43:38 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:35.710 "nvmf_tgt_1" 00:13:35.969 13:43:38 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:35.969 "nvmf_tgt_2" 00:13:35.969 13:43:38 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:35.969 13:43:38 -- target/multitarget.sh@28 -- # jq length 00:13:35.969 13:43:38 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:35.969 13:43:38 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:36.228 true 00:13:36.228 13:43:38 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:36.228 true 00:13:36.228 13:43:38 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:36.228 13:43:38 -- target/multitarget.sh@35 -- # jq length 00:13:36.487 13:43:38 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:36.487 13:43:38 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:36.487 13:43:38 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:36.487 13:43:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:36.487 13:43:38 -- nvmf/common.sh@116 -- # sync 00:13:36.487 13:43:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:36.487 13:43:38 -- nvmf/common.sh@119 -- # set +e 00:13:36.487 13:43:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:36.487 13:43:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:36.487 rmmod nvme_tcp 00:13:36.487 rmmod nvme_fabrics 00:13:36.487 rmmod nvme_keyring 00:13:36.487 13:43:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:36.487 13:43:38 -- nvmf/common.sh@123 -- # set -e 00:13:36.487 13:43:38 -- nvmf/common.sh@124 -- # return 0 
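[Note] The multitarget exercise just traced, condensed to its RPC sequence: multitarget_rpc.py creates and destroys extra nvmf targets inside one app instance, and the jq length checks assert the counts shown in the comments. Script path shortened; flags are the ones in the trace.
    multitarget_rpc.py nvmf_get_targets | jq length          # 1: default target only
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length          # 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length          # back to 1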
00:13:36.487 13:43:38 -- nvmf/common.sh@477 -- # '[' -n 1509851 ']' 00:13:36.487 13:43:38 -- nvmf/common.sh@478 -- # killprocess 1509851 00:13:36.487 13:43:38 -- common/autotest_common.sh@926 -- # '[' -z 1509851 ']' 00:13:36.487 13:43:38 -- common/autotest_common.sh@930 -- # kill -0 1509851 00:13:36.487 13:43:38 -- common/autotest_common.sh@931 -- # uname 00:13:36.487 13:43:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:36.487 13:43:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1509851 00:13:36.487 13:43:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:36.487 13:43:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:36.487 13:43:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1509851' 00:13:36.487 killing process with pid 1509851 00:13:36.487 13:43:38 -- common/autotest_common.sh@945 -- # kill 1509851 00:13:36.487 13:43:38 -- common/autotest_common.sh@950 -- # wait 1509851 00:13:36.746 13:43:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:36.746 13:43:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.746 13:43:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.746 13:43:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.746 13:43:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.746 13:43:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.746 13:43:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.746 13:43:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.652 13:43:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:38.652 00:13:38.652 real 0m9.243s 00:13:38.652 user 0m9.019s 00:13:38.652 sys 0m4.313s 00:13:38.652 13:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.652 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:38.652 ************************************ 00:13:38.652 END TEST nvmf_multitarget 00:13:38.652 ************************************ 00:13:38.652 13:43:41 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:38.652 13:43:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.652 13:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.652 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:38.652 ************************************ 00:13:38.652 START TEST nvmf_rpc 00:13:38.652 ************************************ 00:13:38.652 13:43:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:38.912 * Looking for test storage... 
00:13:38.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.912 13:43:41 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.912 13:43:41 -- nvmf/common.sh@7 -- # uname -s 00:13:38.912 13:43:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.912 13:43:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.912 13:43:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.912 13:43:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.912 13:43:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.912 13:43:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.912 13:43:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.912 13:43:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.912 13:43:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.912 13:43:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.912 13:43:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.912 13:43:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.912 13:43:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.912 13:43:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.912 13:43:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.912 13:43:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.912 13:43:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.912 13:43:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.912 13:43:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.912 13:43:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.912 13:43:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.912 13:43:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.912 13:43:41 -- paths/export.sh@5 -- # export PATH 00:13:38.912 13:43:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.912 13:43:41 -- nvmf/common.sh@46 -- # : 0 00:13:38.912 13:43:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.912 13:43:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.912 13:43:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.912 13:43:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.912 13:43:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.912 13:43:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:38.913 13:43:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.913 13:43:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.913 13:43:41 -- target/rpc.sh@11 -- # loops=5 00:13:38.913 13:43:41 -- target/rpc.sh@23 -- # nvmftestinit 00:13:38.913 13:43:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:38.913 13:43:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.913 13:43:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:38.913 13:43:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:38.913 13:43:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:38.913 13:43:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.913 13:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.913 13:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.913 13:43:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:38.913 13:43:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:38.913 13:43:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:38.913 13:43:41 -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 13:43:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:44.252 13:43:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:44.252 13:43:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:44.252 13:43:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:44.252 13:43:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:44.252 13:43:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:44.252 13:43:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:44.252 13:43:46 -- nvmf/common.sh@294 -- # net_devs=() 00:13:44.252 13:43:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:44.252 13:43:46 -- nvmf/common.sh@295 -- # e810=() 00:13:44.252 13:43:46 -- nvmf/common.sh@295 -- # local -ga e810 00:13:44.252 
13:43:46 -- nvmf/common.sh@296 -- # x722=() 00:13:44.252 13:43:46 -- nvmf/common.sh@296 -- # local -ga x722 00:13:44.252 13:43:46 -- nvmf/common.sh@297 -- # mlx=() 00:13:44.252 13:43:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:44.252 13:43:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.252 13:43:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:44.252 13:43:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:44.252 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:44.252 13:43:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:44.252 13:43:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:44.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:44.252 13:43:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:44.252 13:43:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.252 13:43:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.252 13:43:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:44.252 Found net devices under 0000:86:00.0: cvl_0_0 00:13:44.252 13:43:46 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:44.252 13:43:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.252 13:43:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.252 13:43:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:44.252 Found net devices under 0000:86:00.1: cvl_0_1 00:13:44.252 13:43:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:44.252 13:43:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:44.252 13:43:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.252 13:43:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.252 13:43:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:44.252 13:43:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.252 13:43:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.252 13:43:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:44.252 13:43:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.252 13:43:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.252 13:43:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:44.252 13:43:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:44.252 13:43:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.252 13:43:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.252 13:43:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.252 13:43:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.252 13:43:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:44.252 13:43:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.252 13:43:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.252 13:43:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.252 13:43:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:44.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:13:44.252 00:13:44.252 --- 10.0.0.2 ping statistics --- 00:13:44.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.252 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:44.252 13:43:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:44.252 00:13:44.252 --- 10.0.0.1 ping statistics --- 00:13:44.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.252 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:44.252 13:43:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.252 13:43:46 -- nvmf/common.sh@410 -- # return 0 00:13:44.252 13:43:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.252 13:43:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.252 13:43:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.252 13:43:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.252 13:43:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.252 13:43:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.252 13:43:46 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:44.252 13:43:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.252 13:43:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.252 13:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 13:43:46 -- nvmf/common.sh@469 -- # nvmfpid=1513510 00:13:44.252 13:43:46 -- nvmf/common.sh@470 -- # waitforlisten 1513510 00:13:44.252 13:43:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.252 13:43:46 -- common/autotest_common.sh@819 -- # '[' -z 1513510 ']' 00:13:44.252 13:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.252 13:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.252 13:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.252 13:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.252 13:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 [2024-07-11 13:43:46.479436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:44.252 [2024-07-11 13:43:46.479480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.252 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.252 [2024-07-11 13:43:46.537990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.252 [2024-07-11 13:43:46.578182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.252 [2024-07-11 13:43:46.578300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.252 [2024-07-11 13:43:46.578309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.252 [2024-07-11 13:43:46.578316] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
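
The interface plumbing traced just above (nvmf_tcp_init) builds the usual phy TCP topology for these tests: the first E810 port, cvl_0_0, becomes the target inside a private network namespace at 10.0.0.2, the second, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before the app starts. Condensed from the commands above:

  ip netns add cvl_0_0_ns_spdk                    # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the TCP listeners it opens are served over cvl_0_0.
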
00:13:44.252 [2024-07-11 13:43:46.578355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.252 [2024-07-11 13:43:46.578458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.252 [2024-07-11 13:43:46.578522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.252 [2024-07-11 13:43:46.578524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.191 13:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:45.191 13:43:47 -- common/autotest_common.sh@852 -- # return 0 00:13:45.191 13:43:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:45.191 13:43:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 13:43:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.191 13:43:47 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@26 -- # stats='{ 00:13:45.191 "tick_rate": 2300000000, 00:13:45.191 "poll_groups": [ 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_0", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_1", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_2", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_3", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [] 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 }' 00:13:45.191 13:43:47 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:45.191 13:43:47 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:45.191 13:43:47 -- target/rpc.sh@15 -- # wc -l 00:13:45.191 13:43:47 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:45.191 13:43:47 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:45.191 13:43:47 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:45.191 13:43:47 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:45.191 13:43:47 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 [2024-07-11 13:43:47.438964] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@33 -- # stats='{ 00:13:45.191 "tick_rate": 2300000000, 00:13:45.191 "poll_groups": [ 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_0", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [ 00:13:45.191 { 00:13:45.191 "trtype": "TCP" 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_1", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [ 00:13:45.191 { 00:13:45.191 "trtype": "TCP" 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_2", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [ 00:13:45.191 { 00:13:45.191 "trtype": "TCP" 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 }, 00:13:45.191 { 00:13:45.191 "name": "nvmf_tgt_poll_group_3", 00:13:45.191 "admin_qpairs": 0, 00:13:45.191 "io_qpairs": 0, 00:13:45.191 "current_admin_qpairs": 0, 00:13:45.191 "current_io_qpairs": 0, 00:13:45.191 "pending_bdev_io": 0, 00:13:45.191 "completed_nvme_io": 0, 00:13:45.191 "transports": [ 00:13:45.191 { 00:13:45.191 "trtype": "TCP" 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 } 00:13:45.191 ] 00:13:45.191 }' 00:13:45.191 13:43:47 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.191 13:43:47 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:45.191 13:43:47 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:45.191 13:43:47 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:45.191 13:43:47 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:45.191 13:43:47 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:45.191 13:43:47 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:45.191 13:43:47 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:45.191 13:43:47 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 Malloc1 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 
13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.191 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.191 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.191 [2024-07-11 13:43:47.606922] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.191 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.191 13:43:47 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:45.191 13:43:47 -- common/autotest_common.sh@640 -- # local es=0 00:13:45.191 13:43:47 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:45.191 13:43:47 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:45.191 13:43:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:45.191 13:43:47 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:45.191 13:43:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:45.191 13:43:47 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:45.191 13:43:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:45.191 13:43:47 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:45.192 13:43:47 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:45.192 13:43:47 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:45.192 [2024-07-11 13:43:47.631457] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:45.451 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:45.451 could not add new controller: failed to write to nvme-fabrics device 00:13:45.451 13:43:47 -- common/autotest_common.sh@643 -- # es=1 00:13:45.451 13:43:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:45.451 13:43:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:45.451 13:43:47 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:45.451 13:43:47 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:45.451 13:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.451 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:13:45.451 13:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.451 13:43:47 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.388 13:43:48 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.388 13:43:48 -- common/autotest_common.sh@1177 -- # local i=0 00:13:46.388 13:43:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.388 13:43:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:46.388 13:43:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:48.924 13:43:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:48.924 13:43:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:48.924 13:43:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.924 13:43:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:48.924 13:43:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.924 13:43:50 -- common/autotest_common.sh@1187 -- # return 0 00:13:48.924 13:43:50 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.924 13:43:50 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.924 13:43:50 -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.924 13:43:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:48.924 13:43:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.924 13:43:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:48.924 13:43:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.924 13:43:50 -- common/autotest_common.sh@1210 -- # return 0 00:13:48.924 13:43:50 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:48.924 13:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.924 13:43:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.924 13:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.924 13:43:50 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.924 13:43:50 -- common/autotest_common.sh@640 -- # local es=0 00:13:48.924 13:43:50 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.924 13:43:50 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:48.924 13:43:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:48.924 13:43:50 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:48.924 13:43:50 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:48.924 13:43:50 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:48.924 13:43:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:48.924 13:43:50 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:48.924 13:43:50 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:48.924 13:43:50 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.924 [2024-07-11 13:43:50.895335] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:48.924 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:48.924 could not add new controller: failed to write to nvme-fabrics device 00:13:48.924 13:43:50 -- common/autotest_common.sh@643 -- # es=1 00:13:48.924 13:43:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:48.924 13:43:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:48.924 13:43:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:48.924 13:43:50 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:48.925 13:43:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.925 13:43:50 -- common/autotest_common.sh@10 -- # set +x 00:13:48.925 13:43:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.925 13:43:50 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.861 13:43:52 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:49.861 13:43:52 -- common/autotest_common.sh@1177 -- # local i=0 00:13:49.861 13:43:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.861 13:43:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:49.861 13:43:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:51.766 13:43:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:51.766 13:43:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:51.766 13:43:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.766 13:43:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:51.766 13:43:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.766 13:43:54 -- common/autotest_common.sh@1187 -- # return 0 00:13:51.766 13:43:54 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.766 13:43:54 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.766 13:43:54 -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.766 13:43:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:51.766 13:43:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.766 13:43:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:51.766 13:43:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.766 13:43:54 -- common/autotest_common.sh@1210 -- # return 0 00:13:51.766 13:43:54 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.766 13:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.766 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.766 13:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.766 13:43:54 -- target/rpc.sh@81 -- # seq 1 5 00:13:51.766 13:43:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:51.766 13:43:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.766 13:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.766 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.766 13:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.766 13:43:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.766 13:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.766 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.766 [2024-07-11 13:43:54.151659] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.766 13:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.766 13:43:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:51.766 13:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.766 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.766 13:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.766 13:43:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.766 13:43:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.766 13:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:51.766 13:43:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.766 13:43:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.143 13:43:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.143 13:43:55 -- common/autotest_common.sh@1177 -- # local i=0 00:13:53.143 13:43:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.143 13:43:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:53.143 13:43:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:55.049 13:43:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:55.049 13:43:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:55.049 13:43:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.049 13:43:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:55.049 13:43:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.049 13:43:57 -- common/autotest_common.sh@1187 -- # return 0 00:13:55.049 13:43:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.049 13:43:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.049 13:43:57 -- common/autotest_common.sh@1198 -- # local i=0 00:13:55.049 13:43:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:55.049 13:43:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
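
The waitforserial and waitforserial_disconnect polling traced throughout these iterations watches lsblk for the subsystem's serial to appear after nvme connect, or to disappear after nvme disconnect. A minimal sketch of the connect side, assuming the SPDKISFASTANDAWESOME serial used in this run (the real helper in common/autotest_common.sh also compares the device count against an expected number):

  waitforserial() {
    local serial=$1 i=0
    sleep 2                                  # let the fabric connect settle first
    while (( i++ <= 15 )); do                # bounded poll, as in the trace above
      if lsblk -l -o NAME,SERIAL | grep -qw "$serial"; then
        return 0                             # namespace surfaced as a block device
      fi
      sleep 1
    done
    return 1                                 # never showed up; let the test fail
  }
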
00:13:55.049 13:43:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:55.049 13:43:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.049 13:43:57 -- common/autotest_common.sh@1210 -- # return 0 00:13:55.049 13:43:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.049 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.049 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.049 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.050 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.050 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.050 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:55.050 13:43:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.050 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.050 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.050 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.050 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.050 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.050 [2024-07-11 13:43:57.479142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.050 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:55.050 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.050 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.050 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.050 13:43:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.050 13:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:55.050 13:43:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.050 13:43:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.430 13:43:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.430 13:43:58 -- common/autotest_common.sh@1177 -- # local i=0 00:13:56.430 13:43:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.430 13:43:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:56.430 13:43:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:58.336 13:44:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:58.336 13:44:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:58.336 13:44:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.336 13:44:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:58.336 13:44:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.336 13:44:00 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:58.336 13:44:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.336 13:44:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.336 13:44:00 -- common/autotest_common.sh@1198 -- # local i=0 00:13:58.336 13:44:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:58.336 13:44:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.336 13:44:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:58.336 13:44:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.336 13:44:00 -- common/autotest_common.sh@1210 -- # return 0 00:13:58.336 13:44:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.336 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.336 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.336 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.336 13:44:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.336 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.337 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.337 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.337 13:44:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:58.337 13:44:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.337 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.337 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.337 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.337 13:44:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.337 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.337 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.337 [2024-07-11 13:44:00.722812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.337 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.337 13:44:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:58.337 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.337 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.337 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.337 13:44:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.337 13:44:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.337 13:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:58.337 13:44:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.337 13:44:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.717 13:44:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.717 13:44:01 -- common/autotest_common.sh@1177 -- # local i=0 00:13:59.717 13:44:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.717 13:44:01 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:59.717 13:44:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:01.620 13:44:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:01.620 13:44:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:01.620 13:44:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.620 13:44:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:01.620 13:44:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.620 13:44:03 -- common/autotest_common.sh@1187 -- # return 0 00:14:01.620 13:44:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.620 13:44:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.620 13:44:03 -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.620 13:44:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:01.620 13:44:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.620 13:44:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.620 13:44:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:01.620 13:44:03 -- common/autotest_common.sh@1210 -- # return 0 00:14:01.620 13:44:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.620 13:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 13:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 13:44:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.620 13:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 13:44:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 13:44:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:01.620 13:44:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:01.620 13:44:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:03 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 13:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 13:44:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.620 13:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 [2024-07-11 13:44:04.005637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.620 13:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 13:44:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:01.620 13:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 13:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 13:44:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:01.620 13:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.620 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:14:01.620 13:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.620 
13:44:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.994 13:44:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.994 13:44:05 -- common/autotest_common.sh@1177 -- # local i=0 00:14:02.994 13:44:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.994 13:44:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:02.994 13:44:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:04.896 13:44:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:04.897 13:44:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:04.897 13:44:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.897 13:44:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:04.897 13:44:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.897 13:44:07 -- common/autotest_common.sh@1187 -- # return 0 00:14:04.897 13:44:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.897 13:44:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.897 13:44:07 -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.897 13:44:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:04.897 13:44:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.897 13:44:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:04.897 13:44:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.897 13:44:07 -- common/autotest_common.sh@1210 -- # return 0 00:14:04.897 13:44:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.897 13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.897 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:04.897 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.897 13:44:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.897 13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.897 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:04.897 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.897 13:44:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:04.897 13:44:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:04.897 13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.897 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:04.897 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.897 13:44:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.897 13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.897 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:04.897 [2024-07-11 13:44:07.343890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.897 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.897 13:44:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:04.897 
13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.897 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:05.156 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.156 13:44:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.156 13:44:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.156 13:44:07 -- common/autotest_common.sh@10 -- # set +x 00:14:05.156 13:44:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.156 13:44:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.094 13:44:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.094 13:44:08 -- common/autotest_common.sh@1177 -- # local i=0 00:14:06.094 13:44:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.094 13:44:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:06.094 13:44:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:08.696 13:44:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:08.696 13:44:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:08.696 13:44:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:08.696 13:44:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.696 13:44:10 -- common/autotest_common.sh@1187 -- # return 0 00:14:08.696 13:44:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.696 13:44:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@1198 -- # local i=0 00:14:08.696 13:44:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:08.696 13:44:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:08.696 13:44:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@1210 -- # return 0 00:14:08.696 13:44:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@99 -- # seq 1 5 00:14:08.696 13:44:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.696 13:44:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 [2024-07-11 13:44:10.727750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.696 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.696 13:44:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.696 13:44:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.696 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.696 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 [2024-07-11 13:44:10.775860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.697 13:44:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 [2024-07-11 13:44:10.824000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.697 13:44:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 [2024-07-11 13:44:10.876202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 
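
This second pass (target/rpc.sh@99 onward) re-runs the subsystem lifecycle five times without ever connecting a host: each iteration creates the subsystem, opens the TCP listener, attaches Malloc1, then removes the namespace by NSID and deletes the subsystem. Per iteration, in scripts/rpc.py terms (the test drives the same RPC methods through its rpc_cmd wrapper):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: NSID auto-assigned
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
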
13:44:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.697 13:44:10 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 [2024-07-11 13:44:10.924359] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:14:08.697 13:44:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.697 13:44:10 -- common/autotest_common.sh@10 -- # set +x 00:14:08.697 13:44:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.697 13:44:10 -- target/rpc.sh@110 -- # stats='{ 00:14:08.697 "tick_rate": 2300000000, 00:14:08.697 "poll_groups": [ 00:14:08.697 { 00:14:08.697 "name": "nvmf_tgt_poll_group_0", 00:14:08.697 "admin_qpairs": 2, 00:14:08.697 "io_qpairs": 168, 00:14:08.697 "current_admin_qpairs": 0, 00:14:08.697 "current_io_qpairs": 0, 00:14:08.697 "pending_bdev_io": 0, 00:14:08.697 "completed_nvme_io": 209, 00:14:08.697 "transports": [ 00:14:08.697 { 00:14:08.697 "trtype": "TCP" 00:14:08.697 } 00:14:08.697 ] 00:14:08.697 }, 00:14:08.697 { 00:14:08.697 "name": "nvmf_tgt_poll_group_1", 00:14:08.697 "admin_qpairs": 2, 00:14:08.697 "io_qpairs": 168, 00:14:08.697 "current_admin_qpairs": 0, 00:14:08.697 "current_io_qpairs": 0, 00:14:08.697 "pending_bdev_io": 0, 00:14:08.697 "completed_nvme_io": 299, 00:14:08.697 "transports": [ 00:14:08.697 { 00:14:08.697 "trtype": "TCP" 00:14:08.697 } 00:14:08.697 ] 00:14:08.697 }, 00:14:08.697 { 00:14:08.698 "name": "nvmf_tgt_poll_group_2", 00:14:08.698 "admin_qpairs": 1, 00:14:08.698 "io_qpairs": 168, 00:14:08.698 "current_admin_qpairs": 0, 00:14:08.698 "current_io_qpairs": 0, 00:14:08.698 "pending_bdev_io": 0, 00:14:08.698 "completed_nvme_io": 282, 00:14:08.698 "transports": [ 00:14:08.698 { 00:14:08.698 "trtype": "TCP" 00:14:08.698 } 00:14:08.698 ] 00:14:08.698 }, 00:14:08.698 { 00:14:08.698 "name": "nvmf_tgt_poll_group_3", 00:14:08.698 "admin_qpairs": 2, 00:14:08.698 "io_qpairs": 168, 00:14:08.698 "current_admin_qpairs": 0, 00:14:08.698 "current_io_qpairs": 0, 00:14:08.698 "pending_bdev_io": 0, 00:14:08.698 "completed_nvme_io": 232, 00:14:08.698 "transports": [ 00:14:08.698 { 00:14:08.698 "trtype": "TCP" 00:14:08.698 } 00:14:08.698 ] 00:14:08.698 } 00:14:08.698 ] 00:14:08.698 }' 00:14:08.698 13:44:10 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:08.698 13:44:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:08.698 13:44:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:08.698 13:44:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:08.698 13:44:11 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:08.698 13:44:11 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:08.698 13:44:11 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:08.698 13:44:11 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:08.698 13:44:11 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:08.698 13:44:11 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:08.698 13:44:11 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:08.698 13:44:11 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:08.698 13:44:11 -- target/rpc.sh@123 -- # nvmftestfini 00:14:08.698 13:44:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:08.698 13:44:11 -- nvmf/common.sh@116 -- # sync 00:14:08.698 13:44:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:08.698 13:44:11 -- nvmf/common.sh@119 -- # set +e 00:14:08.698 13:44:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:08.698 13:44:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:08.698 rmmod nvme_tcp 00:14:08.698 rmmod nvme_fabrics 00:14:08.698 rmmod nvme_keyring 00:14:08.698 13:44:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:08.698 13:44:11 -- nvmf/common.sh@123 -- # set -e 00:14:08.698 13:44:11 -- 
nvmf/common.sh@124 -- # return 0 00:14:08.698 13:44:11 -- nvmf/common.sh@477 -- # '[' -n 1513510 ']' 00:14:08.698 13:44:11 -- nvmf/common.sh@478 -- # killprocess 1513510 00:14:08.698 13:44:11 -- common/autotest_common.sh@926 -- # '[' -z 1513510 ']' 00:14:08.698 13:44:11 -- common/autotest_common.sh@930 -- # kill -0 1513510 00:14:08.698 13:44:11 -- common/autotest_common.sh@931 -- # uname 00:14:08.698 13:44:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:08.698 13:44:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1513510 00:14:08.957 13:44:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:08.957 13:44:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:08.957 13:44:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1513510' 00:14:08.957 killing process with pid 1513510 00:14:08.957 13:44:11 -- common/autotest_common.sh@945 -- # kill 1513510 00:14:08.957 13:44:11 -- common/autotest_common.sh@950 -- # wait 1513510 00:14:08.957 13:44:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:08.957 13:44:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:08.957 13:44:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:08.957 13:44:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.957 13:44:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:08.957 13:44:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.957 13:44:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.957 13:44:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.495 13:44:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:11.495 00:14:11.495 real 0m32.368s 00:14:11.495 user 1m40.391s 00:14:11.495 sys 0m5.641s 00:14:11.495 13:44:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.495 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:14:11.495 ************************************ 00:14:11.495 END TEST nvmf_rpc 00:14:11.495 ************************************ 00:14:11.495 13:44:13 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:11.495 13:44:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:11.495 13:44:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:11.495 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:14:11.495 ************************************ 00:14:11.495 START TEST nvmf_invalid 00:14:11.495 ************************************ 00:14:11.495 13:44:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:11.495 * Looking for test storage... 
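The nvmf_rpc run above closes by totalling per-poll-group counters out of the nvmf_get_stats JSON: jq pulls one number per poll group and awk sums them, giving 7 admin qpairs (2+2+1+2) and 672 I/O qpairs (4 x 168) across the four groups. A minimal sketch of such a jsum helper, assuming the stats JSON was already captured into $stats exactly as in the trace above:

    jsum() {
        local filter=$1
        # Emit one value per poll group, then sum them;
        # e.g. '.poll_groups[].io_qpairs' -> 672 in this run.
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

The assertions only check that the totals are positive, e.g. (( $(jsum '.poll_groups[].io_qpairs') > 0 )), since the exact counts depend on how many connect/disconnect loops ran.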
00:14:11.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.495 13:44:13 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.495 13:44:13 -- nvmf/common.sh@7 -- # uname -s 00:14:11.495 13:44:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.495 13:44:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.495 13:44:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.495 13:44:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.495 13:44:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.495 13:44:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.495 13:44:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.495 13:44:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.495 13:44:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.495 13:44:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.495 13:44:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.495 13:44:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.495 13:44:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.495 13:44:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.495 13:44:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.495 13:44:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.495 13:44:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.495 13:44:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.495 13:44:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.495 13:44:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 13:44:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 13:44:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 13:44:13 -- paths/export.sh@5 -- # export PATH 00:14:11.495 13:44:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.495 13:44:13 -- nvmf/common.sh@46 -- # : 0 00:14:11.495 13:44:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.495 13:44:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.495 13:44:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.495 13:44:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.495 13:44:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.495 13:44:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:11.495 13:44:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.495 13:44:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.495 13:44:13 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:11.495 13:44:13 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.495 13:44:13 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:11.495 13:44:13 -- target/invalid.sh@14 -- # target=foobar 00:14:11.496 13:44:13 -- target/invalid.sh@16 -- # RANDOM=0 00:14:11.496 13:44:13 -- target/invalid.sh@34 -- # nvmftestinit 00:14:11.496 13:44:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.496 13:44:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.496 13:44:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.496 13:44:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.496 13:44:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.496 13:44:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.496 13:44:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.496 13:44:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.496 13:44:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:11.496 13:44:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:11.496 13:44:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:11.496 13:44:13 -- common/autotest_common.sh@10 -- # set +x 00:14:16.774 13:44:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:16.774 13:44:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:16.774 13:44:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:16.774 13:44:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:16.774 13:44:18 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:16.774 13:44:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:16.774 13:44:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:16.774 13:44:18 -- nvmf/common.sh@294 -- # net_devs=() 00:14:16.774 13:44:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:16.774 13:44:18 -- nvmf/common.sh@295 -- # e810=() 00:14:16.774 13:44:18 -- nvmf/common.sh@295 -- # local -ga e810 00:14:16.774 13:44:18 -- nvmf/common.sh@296 -- # x722=() 00:14:16.774 13:44:18 -- nvmf/common.sh@296 -- # local -ga x722 00:14:16.774 13:44:18 -- nvmf/common.sh@297 -- # mlx=() 00:14:16.774 13:44:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:16.774 13:44:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.774 13:44:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:16.774 13:44:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:16.774 13:44:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:16.774 13:44:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.774 13:44:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:16.774 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:16.774 13:44:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.774 13:44:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:16.774 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:16.774 13:44:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:16.774 13:44:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:16.774 13:44:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:16.774 
13:44:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.774 13:44:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.774 13:44:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.774 13:44:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:16.774 Found net devices under 0000:86:00.0: cvl_0_0 00:14:16.774 13:44:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.774 13:44:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:16.774 13:44:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.774 13:44:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.774 13:44:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.774 13:44:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:16.774 Found net devices under 0000:86:00.1: cvl_0_1 00:14:16.774 13:44:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.774 13:44:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:16.775 13:44:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:16.775 13:44:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:16.775 13:44:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:16.775 13:44:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:16.775 13:44:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.775 13:44:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.775 13:44:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.775 13:44:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:16.775 13:44:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.775 13:44:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.775 13:44:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:16.775 13:44:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.775 13:44:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.775 13:44:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:16.775 13:44:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:16.775 13:44:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.775 13:44:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.775 13:44:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.775 13:44:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.775 13:44:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:16.775 13:44:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.775 13:44:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.775 13:44:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.775 13:44:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:16.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:14:16.775 00:14:16.775 --- 10.0.0.2 ping statistics --- 00:14:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.775 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:16.775 13:44:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:14:16.775 00:14:16.775 --- 10.0.0.1 ping statistics --- 00:14:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.775 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:16.775 13:44:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.775 13:44:18 -- nvmf/common.sh@410 -- # return 0 00:14:16.775 13:44:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:16.775 13:44:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.775 13:44:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:16.775 13:44:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:16.775 13:44:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.775 13:44:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:16.775 13:44:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:16.775 13:44:18 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:16.775 13:44:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:16.775 13:44:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:16.775 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:14:16.775 13:44:18 -- nvmf/common.sh@469 -- # nvmfpid=1521200 00:14:16.775 13:44:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.775 13:44:18 -- nvmf/common.sh@470 -- # waitforlisten 1521200 00:14:16.775 13:44:18 -- common/autotest_common.sh@819 -- # '[' -z 1521200 ']' 00:14:16.775 13:44:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.775 13:44:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:16.775 13:44:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.775 13:44:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:16.775 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:14:16.775 [2024-07-11 13:44:18.709390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:16.775 [2024-07-11 13:44:18.709432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.775 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.775 [2024-07-11 13:44:18.769671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.775 [2024-07-11 13:44:18.809762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.775 [2024-07-11 13:44:18.809870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.775 [2024-07-11 13:44:18.809878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.775 [2024-07-11 13:44:18.809885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
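With the target app listening, each negative test that follows drives one RPC with a deliberately bad argument through the rpc.py client the script stored in $rpc, captures the JSON-RPC error text, and pattern-matches it. A minimal sketch of that pattern (the '|| true' to survive set -e is an assumption; the nqn and error text are taken from the first case below):

    out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23648 2>&1) || true
    # No subsystem target named "foobar" exists, so the call must fail
    # with code -32603 and a message naming the missing target.
    [[ $out == *'Unable to find target foobar'* ]]

The serial- and model-number cases work the same way, feeding strings with embedded control characters (e.g. $'SPDKISFASTANDAWESOME\037') or random strings assembled character-by-character by gen_random_s, and expecting 'Invalid SN' or 'Invalid MN' in the response.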
00:14:16.775 [2024-07-11 13:44:18.809926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.775 [2024-07-11 13:44:18.810003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.775 [2024-07-11 13:44:18.810021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.775 [2024-07-11 13:44:18.810027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.343 13:44:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.343 13:44:19 -- common/autotest_common.sh@852 -- # return 0 00:14:17.343 13:44:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:17.343 13:44:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:17.343 13:44:19 -- common/autotest_common.sh@10 -- # set +x 00:14:17.343 13:44:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.343 13:44:19 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:17.343 13:44:19 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23648 00:14:17.343 [2024-07-11 13:44:19.709055] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:17.343 13:44:19 -- target/invalid.sh@40 -- # out='request: 00:14:17.343 { 00:14:17.343 "nqn": "nqn.2016-06.io.spdk:cnode23648", 00:14:17.343 "tgt_name": "foobar", 00:14:17.343 "method": "nvmf_create_subsystem", 00:14:17.343 "req_id": 1 00:14:17.343 } 00:14:17.343 Got JSON-RPC error response 00:14:17.343 response: 00:14:17.343 { 00:14:17.343 "code": -32603, 00:14:17.343 "message": "Unable to find target foobar" 00:14:17.343 }' 00:14:17.343 13:44:19 -- target/invalid.sh@41 -- # [[ request: 00:14:17.343 { 00:14:17.343 "nqn": "nqn.2016-06.io.spdk:cnode23648", 00:14:17.343 "tgt_name": "foobar", 00:14:17.343 "method": "nvmf_create_subsystem", 00:14:17.343 "req_id": 1 00:14:17.343 } 00:14:17.343 Got JSON-RPC error response 00:14:17.343 response: 00:14:17.343 { 00:14:17.343 "code": -32603, 00:14:17.343 "message": "Unable to find target foobar" 00:14:17.343 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:17.343 13:44:19 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:17.343 13:44:19 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1774 00:14:17.603 [2024-07-11 13:44:19.893732] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1774: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:17.603 13:44:19 -- target/invalid.sh@45 -- # out='request: 00:14:17.603 { 00:14:17.603 "nqn": "nqn.2016-06.io.spdk:cnode1774", 00:14:17.603 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.603 "method": "nvmf_create_subsystem", 00:14:17.603 "req_id": 1 00:14:17.603 } 00:14:17.603 Got JSON-RPC error response 00:14:17.603 response: 00:14:17.603 { 00:14:17.603 "code": -32602, 00:14:17.603 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.603 }' 00:14:17.603 13:44:19 -- target/invalid.sh@46 -- # [[ request: 00:14:17.603 { 00:14:17.603 "nqn": "nqn.2016-06.io.spdk:cnode1774", 00:14:17.603 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:17.603 "method": "nvmf_create_subsystem", 00:14:17.603 "req_id": 1 00:14:17.603 } 00:14:17.603 Got JSON-RPC error response 00:14:17.603 response: 00:14:17.603 { 
00:14:17.603 "code": -32602, 00:14:17.603 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:17.603 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:17.603 13:44:19 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:17.603 13:44:19 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode900 00:14:17.862 [2024-07-11 13:44:20.086340] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode900: invalid model number 'SPDK_Controller' 00:14:17.862 13:44:20 -- target/invalid.sh@50 -- # out='request: 00:14:17.862 { 00:14:17.862 "nqn": "nqn.2016-06.io.spdk:cnode900", 00:14:17.862 "model_number": "SPDK_Controller\u001f", 00:14:17.862 "method": "nvmf_create_subsystem", 00:14:17.862 "req_id": 1 00:14:17.862 } 00:14:17.862 Got JSON-RPC error response 00:14:17.862 response: 00:14:17.862 { 00:14:17.862 "code": -32602, 00:14:17.862 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.862 }' 00:14:17.862 13:44:20 -- target/invalid.sh@51 -- # [[ request: 00:14:17.862 { 00:14:17.862 "nqn": "nqn.2016-06.io.spdk:cnode900", 00:14:17.862 "model_number": "SPDK_Controller\u001f", 00:14:17.862 "method": "nvmf_create_subsystem", 00:14:17.862 "req_id": 1 00:14:17.862 } 00:14:17.862 Got JSON-RPC error response 00:14:17.862 response: 00:14:17.862 { 00:14:17.862 "code": -32602, 00:14:17.862 "message": "Invalid MN SPDK_Controller\u001f" 00:14:17.862 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:17.862 13:44:20 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:17.862 13:44:20 -- target/invalid.sh@19 -- # local length=21 ll 00:14:17.862 13:44:20 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:17.863 13:44:20 -- target/invalid.sh@21 -- # local chars 00:14:17.863 13:44:20 -- target/invalid.sh@22 -- # local string 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 89 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=Y 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 44 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=, 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 100 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=d 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 51 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # 
echo -e '\x33' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=3 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 89 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=Y 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 93 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=']' 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 82 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=R 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 90 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=Z 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 63 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+='?' 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 108 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=l 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 64 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=@ 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 88 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=X 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 53 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=5 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 52 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=4 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 32 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e 
'\x20' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=' ' 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 62 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+='>' 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 115 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=s 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 77 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=M 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 88 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=X 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 104 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+=h 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # printf %x 94 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:17.863 13:44:20 -- target/invalid.sh@25 -- # string+='^' 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:17.863 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:17.863 13:44:20 -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:14:17.863 13:44:20 -- target/invalid.sh@31 -- # echo 'Y,d3Y]RZ?l@X54 >sMXh^' 00:14:17.863 13:44:20 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Y,d3Y]RZ?l@X54 >sMXh^' nqn.2016-06.io.spdk:cnode26985 00:14:18.123 [2024-07-11 13:44:20.399425] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26985: invalid serial number 'Y,d3Y]RZ?l@X54 >sMXh^' 00:14:18.123 13:44:20 -- target/invalid.sh@54 -- # out='request: 00:14:18.123 { 00:14:18.123 "nqn": "nqn.2016-06.io.spdk:cnode26985", 00:14:18.123 "serial_number": "Y,d3Y]RZ?l@X54 >sMXh^", 00:14:18.123 "method": "nvmf_create_subsystem", 00:14:18.123 "req_id": 1 00:14:18.123 } 00:14:18.123 Got JSON-RPC error response 00:14:18.123 response: 00:14:18.123 { 00:14:18.123 "code": -32602, 00:14:18.123 "message": "Invalid SN Y,d3Y]RZ?l@X54 >sMXh^" 00:14:18.123 }' 00:14:18.123 13:44:20 -- target/invalid.sh@55 -- # [[ request: 00:14:18.123 { 00:14:18.123 "nqn": "nqn.2016-06.io.spdk:cnode26985", 00:14:18.123 "serial_number": "Y,d3Y]RZ?l@X54 >sMXh^", 00:14:18.123 "method": "nvmf_create_subsystem", 00:14:18.123 "req_id": 1 00:14:18.123 } 00:14:18.123 Got JSON-RPC error response 00:14:18.123 response: 00:14:18.123 { 00:14:18.123 "code": -32602, 00:14:18.123 "message": "Invalid SN 
Y,d3Y]RZ?l@X54 >sMXh^" 00:14:18.123 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:18.123 13:44:20 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:18.123 13:44:20 -- target/invalid.sh@19 -- # local length=41 ll 00:14:18.123 13:44:20 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:18.123 13:44:20 -- target/invalid.sh@21 -- # local chars 00:14:18.123 13:44:20 -- target/invalid.sh@22 -- # local string 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 64 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=@ 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 32 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=' ' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 113 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=q 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 75 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=K 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 35 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='#' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 79 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=O 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 116 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=t 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 38 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='&' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 42 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='*' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 80 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=P 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 85 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=U 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 34 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='"' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 65 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=A 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 85 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=U 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 49 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=1 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 87 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=W 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 63 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='?' 
00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 77 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=M 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 127 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=$'\177' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 58 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=: 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 63 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+='?' 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 71 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+=G 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # printf %x 61 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:18.123 13:44:20 -- target/invalid.sh@25 -- # string+== 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.123 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 36 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # string+='$' 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 43 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # string+=+ 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 111 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # string+=o 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 96 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # string+='`' 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 73 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # 
string+=I 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 96 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # string+='`' 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.382 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.382 13:44:20 -- target/invalid.sh@25 -- # printf %x 36 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+='$' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 37 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=% 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 104 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=h 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 115 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=s 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 92 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+='\' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 42 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+='*' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 89 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=Y 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 62 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+='>' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 85 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=U 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 126 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # 
string+='~' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 70 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+=F 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # printf %x 40 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:18.383 13:44:20 -- target/invalid.sh@25 -- # string+='(' 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:18.383 13:44:20 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:18.383 13:44:20 -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:14:18.383 13:44:20 -- target/invalid.sh@31 -- # echo '@ qK#Ot&*PU"AU1W?M:?G=$+o`I`$%hs\*Y>U~F(' 00:14:18.383 13:44:20 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '@ qK#Ot&*PU"AU1W?M:?G=$+o`I`$%hs\*Y>U~F(' nqn.2016-06.io.spdk:cnode4676 00:14:18.383 [2024-07-11 13:44:20.832939] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4676: invalid model number '@ qK#Ot&*PU"AU1W?M:?G=$+o`I`$%hs\*Y>U~F(' 00:14:18.641 13:44:20 -- target/invalid.sh@58 -- # out='request: 00:14:18.641 { 00:14:18.641 "nqn": "nqn.2016-06.io.spdk:cnode4676", 00:14:18.641 "model_number": "@ qK#Ot&*PU\"AU1W?M\u007f:?G=$+o`I`$%hs\\*Y>U~F(", 00:14:18.641 "method": "nvmf_create_subsystem", 00:14:18.641 "req_id": 1 00:14:18.641 } 00:14:18.641 Got JSON-RPC error response 00:14:18.641 response: 00:14:18.641 { 00:14:18.641 "code": -32602, 00:14:18.641 "message": "Invalid MN @ qK#Ot&*PU\"AU1W?M\u007f:?G=$+o`I`$%hs\\*Y>U~F(" 00:14:18.641 }' 00:14:18.641 13:44:20 -- target/invalid.sh@59 -- # [[ request: 00:14:18.641 { 00:14:18.641 "nqn": "nqn.2016-06.io.spdk:cnode4676", 00:14:18.641 "model_number": "@ qK#Ot&*PU\"AU1W?M\u007f:?G=$+o`I`$%hs\\*Y>U~F(", 00:14:18.641 "method": "nvmf_create_subsystem", 00:14:18.641 "req_id": 1 00:14:18.641 } 00:14:18.641 Got JSON-RPC error response 00:14:18.641 response: 00:14:18.641 { 00:14:18.641 "code": -32602, 00:14:18.641 "message": "Invalid MN @ qK#Ot&*PU\"AU1W?M\u007f:?G=$+o`I`$%hs\\*Y>U~F(" 00:14:18.641 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:18.641 13:44:20 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:18.641 [2024-07-11 13:44:21.013611] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.642 13:44:21 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:18.900 13:44:21 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:18.900 13:44:21 -- target/invalid.sh@67 -- # echo '' 00:14:18.900 13:44:21 -- target/invalid.sh@67 -- # head -n 1 00:14:18.900 13:44:21 -- target/invalid.sh@67 -- # IP= 00:14:18.900 13:44:21 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:19.159 [2024-07-11 13:44:21.366878] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:19.159 13:44:21 -- target/invalid.sh@69 -- # out='request: 00:14:19.159 { 00:14:19.159 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:14:19.159 "listen_address": { 00:14:19.159 "trtype": "tcp", 00:14:19.159 "traddr": "", 00:14:19.159 "trsvcid": "4421" 00:14:19.159 }, 00:14:19.159 "method": "nvmf_subsystem_remove_listener", 00:14:19.159 "req_id": 1 00:14:19.159 } 00:14:19.159 Got JSON-RPC error response 00:14:19.159 response: 00:14:19.159 { 00:14:19.159 "code": -32602, 00:14:19.159 "message": "Invalid parameters" 00:14:19.159 }' 00:14:19.159 13:44:21 -- target/invalid.sh@70 -- # [[ request: 00:14:19.159 { 00:14:19.159 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:19.159 "listen_address": { 00:14:19.159 "trtype": "tcp", 00:14:19.159 "traddr": "", 00:14:19.159 "trsvcid": "4421" 00:14:19.159 }, 00:14:19.159 "method": "nvmf_subsystem_remove_listener", 00:14:19.159 "req_id": 1 00:14:19.159 } 00:14:19.159 Got JSON-RPC error response 00:14:19.159 response: 00:14:19.159 { 00:14:19.159 "code": -32602, 00:14:19.159 "message": "Invalid parameters" 00:14:19.159 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:19.159 13:44:21 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4117 -i 0 00:14:19.159 [2024-07-11 13:44:21.535403] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4117: invalid cntlid range [0-65519] 00:14:19.159 13:44:21 -- target/invalid.sh@73 -- # out='request: 00:14:19.159 { 00:14:19.159 "nqn": "nqn.2016-06.io.spdk:cnode4117", 00:14:19.159 "min_cntlid": 0, 00:14:19.159 "method": "nvmf_create_subsystem", 00:14:19.159 "req_id": 1 00:14:19.159 } 00:14:19.159 Got JSON-RPC error response 00:14:19.159 response: 00:14:19.159 { 00:14:19.159 "code": -32602, 00:14:19.159 "message": "Invalid cntlid range [0-65519]" 00:14:19.159 }' 00:14:19.159 13:44:21 -- target/invalid.sh@74 -- # [[ request: 00:14:19.159 { 00:14:19.159 "nqn": "nqn.2016-06.io.spdk:cnode4117", 00:14:19.159 "min_cntlid": 0, 00:14:19.159 "method": "nvmf_create_subsystem", 00:14:19.159 "req_id": 1 00:14:19.159 } 00:14:19.159 Got JSON-RPC error response 00:14:19.159 response: 00:14:19.159 { 00:14:19.159 "code": -32602, 00:14:19.159 "message": "Invalid cntlid range [0-65519]" 00:14:19.159 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.159 13:44:21 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32138 -i 65520 00:14:19.417 [2024-07-11 13:44:21.724057] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32138: invalid cntlid range [65520-65519] 00:14:19.417 13:44:21 -- target/invalid.sh@75 -- # out='request: 00:14:19.417 { 00:14:19.417 "nqn": "nqn.2016-06.io.spdk:cnode32138", 00:14:19.417 "min_cntlid": 65520, 00:14:19.417 "method": "nvmf_create_subsystem", 00:14:19.417 "req_id": 1 00:14:19.417 } 00:14:19.417 Got JSON-RPC error response 00:14:19.417 response: 00:14:19.417 { 00:14:19.417 "code": -32602, 00:14:19.417 "message": "Invalid cntlid range [65520-65519]" 00:14:19.417 }' 00:14:19.417 13:44:21 -- target/invalid.sh@76 -- # [[ request: 00:14:19.417 { 00:14:19.417 "nqn": "nqn.2016-06.io.spdk:cnode32138", 00:14:19.417 "min_cntlid": 65520, 00:14:19.417 "method": "nvmf_create_subsystem", 00:14:19.417 "req_id": 1 00:14:19.417 } 00:14:19.417 Got JSON-RPC error response 00:14:19.417 response: 00:14:19.417 { 00:14:19.417 "code": -32602, 00:14:19.417 "message": "Invalid cntlid range [65520-65519]" 00:14:19.417 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.417 13:44:21 -- 
target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8402 -I 0 00:14:19.676 [2024-07-11 13:44:21.912738] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8402: invalid cntlid range [1-0] 00:14:19.676 13:44:21 -- target/invalid.sh@77 -- # out='request: 00:14:19.676 { 00:14:19.676 "nqn": "nqn.2016-06.io.spdk:cnode8402", 00:14:19.676 "max_cntlid": 0, 00:14:19.676 "method": "nvmf_create_subsystem", 00:14:19.676 "req_id": 1 00:14:19.676 } 00:14:19.676 Got JSON-RPC error response 00:14:19.676 response: 00:14:19.676 { 00:14:19.676 "code": -32602, 00:14:19.676 "message": "Invalid cntlid range [1-0]" 00:14:19.676 }' 00:14:19.676 13:44:21 -- target/invalid.sh@78 -- # [[ request: 00:14:19.676 { 00:14:19.676 "nqn": "nqn.2016-06.io.spdk:cnode8402", 00:14:19.676 "max_cntlid": 0, 00:14:19.676 "method": "nvmf_create_subsystem", 00:14:19.676 "req_id": 1 00:14:19.676 } 00:14:19.676 Got JSON-RPC error response 00:14:19.676 response: 00:14:19.676 { 00:14:19.676 "code": -32602, 00:14:19.676 "message": "Invalid cntlid range [1-0]" 00:14:19.676 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.676 13:44:21 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24186 -I 65520 00:14:19.676 [2024-07-11 13:44:22.089353] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24186: invalid cntlid range [1-65520] 00:14:19.676 13:44:22 -- target/invalid.sh@79 -- # out='request: 00:14:19.676 { 00:14:19.676 "nqn": "nqn.2016-06.io.spdk:cnode24186", 00:14:19.676 "max_cntlid": 65520, 00:14:19.676 "method": "nvmf_create_subsystem", 00:14:19.676 "req_id": 1 00:14:19.676 } 00:14:19.676 Got JSON-RPC error response 00:14:19.676 response: 00:14:19.676 { 00:14:19.676 "code": -32602, 00:14:19.676 "message": "Invalid cntlid range [1-65520]" 00:14:19.676 }' 00:14:19.676 13:44:22 -- target/invalid.sh@80 -- # [[ request: 00:14:19.676 { 00:14:19.676 "nqn": "nqn.2016-06.io.spdk:cnode24186", 00:14:19.676 "max_cntlid": 65520, 00:14:19.676 "method": "nvmf_create_subsystem", 00:14:19.676 "req_id": 1 00:14:19.676 } 00:14:19.676 Got JSON-RPC error response 00:14:19.676 response: 00:14:19.676 { 00:14:19.676 "code": -32602, 00:14:19.676 "message": "Invalid cntlid range [1-65520]" 00:14:19.676 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.676 13:44:22 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27403 -i 6 -I 5 00:14:19.935 [2024-07-11 13:44:22.269964] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27403: invalid cntlid range [6-5] 00:14:19.935 13:44:22 -- target/invalid.sh@83 -- # out='request: 00:14:19.935 { 00:14:19.935 "nqn": "nqn.2016-06.io.spdk:cnode27403", 00:14:19.935 "min_cntlid": 6, 00:14:19.935 "max_cntlid": 5, 00:14:19.935 "method": "nvmf_create_subsystem", 00:14:19.935 "req_id": 1 00:14:19.935 } 00:14:19.935 Got JSON-RPC error response 00:14:19.935 response: 00:14:19.935 { 00:14:19.935 "code": -32602, 00:14:19.935 "message": "Invalid cntlid range [6-5]" 00:14:19.935 }' 00:14:19.935 13:44:22 -- target/invalid.sh@84 -- # [[ request: 00:14:19.935 { 00:14:19.935 "nqn": "nqn.2016-06.io.spdk:cnode27403", 00:14:19.935 "min_cntlid": 6, 00:14:19.935 "max_cntlid": 5, 00:14:19.935 "method": "nvmf_create_subsystem", 00:14:19.935 "req_id": 1 
00:14:19.935 } 00:14:19.935 Got JSON-RPC error response 00:14:19.935 response: 00:14:19.935 { 00:14:19.935 "code": -32602, 00:14:19.935 "message": "Invalid cntlid range [6-5]" 00:14:19.935 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:19.935 13:44:22 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:20.193 13:44:22 -- target/invalid.sh@87 -- # out='request: 00:14:20.193 { 00:14:20.193 "name": "foobar", 00:14:20.193 "method": "nvmf_delete_target", 00:14:20.194 "req_id": 1 00:14:20.194 } 00:14:20.194 Got JSON-RPC error response 00:14:20.194 response: 00:14:20.194 { 00:14:20.194 "code": -32602, 00:14:20.194 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:20.194 }' 00:14:20.194 13:44:22 -- target/invalid.sh@88 -- # [[ request: 00:14:20.194 { 00:14:20.194 "name": "foobar", 00:14:20.194 "method": "nvmf_delete_target", 00:14:20.194 "req_id": 1 00:14:20.194 } 00:14:20.194 Got JSON-RPC error response 00:14:20.194 response: 00:14:20.194 { 00:14:20.194 "code": -32602, 00:14:20.194 "message": "The specified target doesn't exist, cannot delete it." 00:14:20.194 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:20.194 13:44:22 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:20.194 13:44:22 -- target/invalid.sh@91 -- # nvmftestfini 00:14:20.194 13:44:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:20.194 13:44:22 -- nvmf/common.sh@116 -- # sync 00:14:20.194 13:44:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:20.194 13:44:22 -- nvmf/common.sh@119 -- # set +e 00:14:20.194 13:44:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:20.194 13:44:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:20.194 rmmod nvme_tcp 00:14:20.194 rmmod nvme_fabrics 00:14:20.194 rmmod nvme_keyring 00:14:20.194 13:44:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:20.194 13:44:22 -- nvmf/common.sh@123 -- # set -e 00:14:20.194 13:44:22 -- nvmf/common.sh@124 -- # return 0 00:14:20.194 13:44:22 -- nvmf/common.sh@477 -- # '[' -n 1521200 ']' 00:14:20.194 13:44:22 -- nvmf/common.sh@478 -- # killprocess 1521200 00:14:20.194 13:44:22 -- common/autotest_common.sh@926 -- # '[' -z 1521200 ']' 00:14:20.194 13:44:22 -- common/autotest_common.sh@930 -- # kill -0 1521200 00:14:20.194 13:44:22 -- common/autotest_common.sh@931 -- # uname 00:14:20.194 13:44:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.194 13:44:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1521200 00:14:20.194 13:44:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.194 13:44:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.194 13:44:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1521200' 00:14:20.194 killing process with pid 1521200 00:14:20.194 13:44:22 -- common/autotest_common.sh@945 -- # kill 1521200 00:14:20.194 13:44:22 -- common/autotest_common.sh@950 -- # wait 1521200 00:14:20.452 13:44:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:20.452 13:44:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:20.452 13:44:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:20.452 13:44:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.452 13:44:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:20.452 13:44:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:20.452 13:44:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.452 13:44:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.355 13:44:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:22.355 00:14:22.355 real 0m11.270s 00:14:22.355 user 0m19.074s 00:14:22.355 sys 0m4.785s 00:14:22.355 13:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.355 13:44:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 ************************************ 00:14:22.355 END TEST nvmf_invalid 00:14:22.355 ************************************ 00:14:22.355 13:44:24 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:22.355 13:44:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:22.355 13:44:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.355 13:44:24 -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 ************************************ 00:14:22.355 START TEST nvmf_abort 00:14:22.355 ************************************ 00:14:22.355 13:44:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:22.615 * Looking for test storage... 00:14:22.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.615 13:44:24 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.615 13:44:24 -- nvmf/common.sh@7 -- # uname -s 00:14:22.615 13:44:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.615 13:44:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.615 13:44:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.615 13:44:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.615 13:44:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.615 13:44:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.615 13:44:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.615 13:44:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.615 13:44:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.615 13:44:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.615 13:44:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.615 13:44:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.615 13:44:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.615 13:44:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.615 13:44:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.615 13:44:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.615 13:44:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.615 13:44:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.615 13:44:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.615 13:44:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.615 13:44:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.615 13:44:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.615 13:44:24 -- paths/export.sh@5 -- # export PATH 00:14:22.615 13:44:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.615 13:44:24 -- nvmf/common.sh@46 -- # : 0 00:14:22.615 13:44:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:22.615 13:44:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:22.615 13:44:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:22.615 13:44:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.615 13:44:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.615 13:44:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:22.615 13:44:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:22.615 13:44:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:22.615 13:44:24 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.615 13:44:24 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:22.615 13:44:24 -- target/abort.sh@14 -- # nvmftestinit 00:14:22.615 13:44:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:22.615 13:44:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.615 13:44:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:22.615 13:44:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:22.615 13:44:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:22.615 13:44:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:22.615 13:44:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.615 13:44:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.615 13:44:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:22.615 13:44:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:22.615 13:44:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:22.615 13:44:24 -- common/autotest_common.sh@10 -- # set +x 00:14:27.887 13:44:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:27.887 13:44:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:27.887 13:44:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:27.887 13:44:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:27.887 13:44:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:27.887 13:44:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:27.887 13:44:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:27.887 13:44:29 -- nvmf/common.sh@294 -- # net_devs=() 00:14:27.887 13:44:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:27.887 13:44:29 -- nvmf/common.sh@295 -- # e810=() 00:14:27.887 13:44:29 -- nvmf/common.sh@295 -- # local -ga e810 00:14:27.887 13:44:29 -- nvmf/common.sh@296 -- # x722=() 00:14:27.887 13:44:29 -- nvmf/common.sh@296 -- # local -ga x722 00:14:27.887 13:44:29 -- nvmf/common.sh@297 -- # mlx=() 00:14:27.887 13:44:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:27.887 13:44:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.887 13:44:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:27.887 13:44:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:27.887 13:44:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:27.887 13:44:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:27.887 13:44:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:27.887 13:44:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:27.888 13:44:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:27.888 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:27.888 13:44:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:27.888 13:44:29 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:27.888 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:27.888 13:44:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:27.888 13:44:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.888 13:44:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.888 13:44:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:27.888 Found net devices under 0000:86:00.0: cvl_0_0 00:14:27.888 13:44:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.888 13:44:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:27.888 13:44:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.888 13:44:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.888 13:44:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:27.888 Found net devices under 0000:86:00.1: cvl_0_1 00:14:27.888 13:44:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.888 13:44:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:27.888 13:44:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:27.888 13:44:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.888 13:44:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.888 13:44:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.888 13:44:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:27.888 13:44:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.888 13:44:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.888 13:44:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:27.888 13:44:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.888 13:44:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.888 13:44:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:27.888 13:44:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:27.888 13:44:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.888 13:44:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.888 13:44:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.888 13:44:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.888 13:44:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:27.888 13:44:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
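[annotation] For reference, the nvmf_tcp_init sequence logged above reduces to the following namespace topology — a minimal sketch using the interface names from this log (cvl_0_0/cvl_0_1, the two e810 ports detected earlier); the names will differ on other hosts:

    # Target-side port lives in a private namespace; initiator-side port
    # stays in the root namespace. 10.0.0.1 = initiator, 10.0.0.2 = target.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The cross-namespace pings that follow in the log confirm both directions are reachable before the target is started.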
00:14:27.888 13:44:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.888 13:44:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.888 13:44:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:27.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:27.888 00:14:27.888 --- 10.0.0.2 ping statistics --- 00:14:27.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.888 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:27.888 13:44:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:14:27.888 00:14:27.888 --- 10.0.0.1 ping statistics --- 00:14:27.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.888 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:14:27.888 13:44:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.888 13:44:29 -- nvmf/common.sh@410 -- # return 0 00:14:27.888 13:44:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:27.888 13:44:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.888 13:44:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:27.888 13:44:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.888 13:44:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:27.888 13:44:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:27.888 13:44:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:27.888 13:44:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:27.888 13:44:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:27.888 13:44:29 -- common/autotest_common.sh@10 -- # set +x 00:14:27.888 13:44:29 -- nvmf/common.sh@469 -- # nvmfpid=1525384 00:14:27.888 13:44:29 -- nvmf/common.sh@470 -- # waitforlisten 1525384 00:14:27.888 13:44:29 -- common/autotest_common.sh@819 -- # '[' -z 1525384 ']' 00:14:27.888 13:44:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.888 13:44:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.888 13:44:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.888 13:44:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:27.888 13:44:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.888 13:44:29 -- common/autotest_common.sh@10 -- # set +x 00:14:27.888 [2024-07-11 13:44:29.671417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
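[annotation] waitforlisten, invoked above, simply polls the target's JSON-RPC socket until the freshly started nvmf_tgt answers. A rough equivalent is sketched below; the retry count and interval are assumptions, not the real helper's values (rpc_get_methods is a standard SPDK RPC, -s selects the socket, -t caps how long one attempt may block):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # Succeeds only once the target is listening on /var/tmp/spdk.sock.
        if "$RPC" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done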
00:14:27.888 [2024-07-11 13:44:29.671464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.888 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.888 [2024-07-11 13:44:29.730073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.888 [2024-07-11 13:44:29.769221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:27.888 [2024-07-11 13:44:29.769331] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.888 [2024-07-11 13:44:29.769339] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.888 [2024-07-11 13:44:29.769346] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.888 [2024-07-11 13:44:29.769445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.888 [2024-07-11 13:44:29.769532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.888 [2024-07-11 13:44:29.769533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.148 13:44:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.148 13:44:30 -- common/autotest_common.sh@852 -- # return 0 00:14:28.148 13:44:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:28.148 13:44:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 13:44:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.148 13:44:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 [2024-07-11 13:44:30.501612] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 Malloc0 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 Delay0 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 [2024-07-11 13:44:30.559937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.148 13:44:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.148 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:14:28.148 13:44:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.148 13:44:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:28.148 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.407 [2024-07-11 13:44:30.665941] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:30.314 Initializing NVMe Controllers 00:14:30.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:30.314 controller IO queue size 128 less than required 00:14:30.314 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:30.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:30.314 Initialization complete. Launching workers. 00:14:30.314 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42826 00:14:30.314 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42887, failed to submit 62 00:14:30.314 success 42826, unsuccess 61, failed 0 00:14:30.314 13:44:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:30.314 13:44:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.314 13:44:32 -- common/autotest_common.sh@10 -- # set +x 00:14:30.314 13:44:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.314 13:44:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:30.314 13:44:32 -- target/abort.sh@38 -- # nvmftestfini 00:14:30.314 13:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.314 13:44:32 -- nvmf/common.sh@116 -- # sync 00:14:30.314 13:44:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.314 13:44:32 -- nvmf/common.sh@119 -- # set +e 00:14:30.314 13:44:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.314 13:44:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.314 rmmod nvme_tcp 00:14:30.574 rmmod nvme_fabrics 00:14:30.574 rmmod nvme_keyring 00:14:30.574 13:44:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.574 13:44:32 -- nvmf/common.sh@123 -- # set -e 00:14:30.574 13:44:32 -- nvmf/common.sh@124 -- # return 0 00:14:30.574 13:44:32 -- nvmf/common.sh@477 -- # '[' -n 1525384 ']' 00:14:30.574 13:44:32 -- nvmf/common.sh@478 -- # killprocess 1525384 00:14:30.574 13:44:32 -- common/autotest_common.sh@926 -- # '[' -z 1525384 ']' 00:14:30.574 13:44:32 -- common/autotest_common.sh@930 -- # kill -0 1525384 00:14:30.574 13:44:32 -- common/autotest_common.sh@931 -- # uname 00:14:30.574 13:44:32 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:30.574 13:44:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1525384 00:14:30.574 13:44:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:30.574 13:44:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:30.574 13:44:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1525384' 00:14:30.574 killing process with pid 1525384 00:14:30.574 13:44:32 -- common/autotest_common.sh@945 -- # kill 1525384 00:14:30.574 13:44:32 -- common/autotest_common.sh@950 -- # wait 1525384 00:14:30.833 13:44:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:30.833 13:44:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:30.833 13:44:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:30.833 13:44:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.833 13:44:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:30.833 13:44:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.833 13:44:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.833 13:44:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.830 13:44:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:32.830 00:14:32.830 real 0m10.338s 00:14:32.830 user 0m12.580s 00:14:32.830 sys 0m4.554s 00:14:32.830 13:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.830 13:44:35 -- common/autotest_common.sh@10 -- # set +x 00:14:32.830 ************************************ 00:14:32.830 END TEST nvmf_abort 00:14:32.830 ************************************ 00:14:32.830 13:44:35 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:32.830 13:44:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:32.830 13:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:32.830 13:44:35 -- common/autotest_common.sh@10 -- # set +x 00:14:32.830 ************************************ 00:14:32.830 START TEST nvmf_ns_hotplug_stress 00:14:32.830 ************************************ 00:14:32.830 13:44:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:32.830 * Looking for test storage... 
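[annotation] Every suite in this log is driven by run_test from autotest_common.sh, which produces the asterisk banners and the real/user/sys timing seen at each suite boundary above. Conceptually it behaves like the simplified sketch below (not the real helper, which also does xtrace bookkeeping); the invocation shown uses the exact command logged for the suite starting here:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # source of the real/user/sys lines above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_ns_hotplug_stress \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh \
        --transport=tcp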
00:14:32.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.830 13:44:35 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.830 13:44:35 -- nvmf/common.sh@7 -- # uname -s 00:14:32.830 13:44:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.830 13:44:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.830 13:44:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.830 13:44:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.830 13:44:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.830 13:44:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.830 13:44:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.830 13:44:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.830 13:44:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.830 13:44:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.830 13:44:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.830 13:44:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.830 13:44:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.830 13:44:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.830 13:44:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.830 13:44:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.830 13:44:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.830 13:44:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.830 13:44:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.830 13:44:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.830 13:44:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.830 13:44:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.830 13:44:35 -- paths/export.sh@5 -- # export PATH 00:14:32.830 13:44:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.830 13:44:35 -- nvmf/common.sh@46 -- # : 0 00:14:32.830 13:44:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:32.830 13:44:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:32.830 13:44:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:32.830 13:44:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.830 13:44:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.830 13:44:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:32.830 13:44:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:32.830 13:44:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:32.830 13:44:35 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.830 13:44:35 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:32.830 13:44:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:32.830 13:44:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.830 13:44:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:32.830 13:44:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:32.830 13:44:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:32.830 13:44:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.830 13:44:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.830 13:44:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.830 13:44:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:32.830 13:44:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:32.830 13:44:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:32.830 13:44:35 -- common/autotest_common.sh@10 -- # set +x 00:14:38.108 13:44:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:38.108 13:44:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:38.108 13:44:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:38.108 13:44:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:38.108 13:44:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:38.108 13:44:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:38.108 13:44:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:38.108 13:44:40 -- nvmf/common.sh@294 -- # net_devs=() 00:14:38.108 13:44:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:38.108 13:44:40 -- nvmf/common.sh@295 
-- # e810=() 00:14:38.108 13:44:40 -- nvmf/common.sh@295 -- # local -ga e810 00:14:38.108 13:44:40 -- nvmf/common.sh@296 -- # x722=() 00:14:38.108 13:44:40 -- nvmf/common.sh@296 -- # local -ga x722 00:14:38.108 13:44:40 -- nvmf/common.sh@297 -- # mlx=() 00:14:38.108 13:44:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:38.108 13:44:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.108 13:44:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:38.108 13:44:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:38.108 13:44:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:38.108 13:44:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.108 13:44:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:38.108 13:44:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.108 13:44:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:38.108 13:44:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.108 13:44:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.108 13:44:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.108 Found 
net devices under 0000:86:00.0: cvl_0_0 00:14:38.108 13:44:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.108 13:44:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:38.108 13:44:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.108 13:44:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.108 13:44:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.108 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.108 13:44:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.108 13:44:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:38.108 13:44:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:38.108 13:44:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:38.108 13:44:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.108 13:44:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.108 13:44:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.108 13:44:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:38.108 13:44:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.108 13:44:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.108 13:44:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:38.108 13:44:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.108 13:44:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.108 13:44:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:38.108 13:44:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:38.108 13:44:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.368 13:44:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.368 13:44:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.368 13:44:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.368 13:44:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:38.368 13:44:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.368 13:44:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.368 13:44:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.368 13:44:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:38.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:38.368 00:14:38.368 --- 10.0.0.2 ping statistics --- 00:14:38.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.368 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:38.368 13:44:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:14:38.368 00:14:38.368 --- 10.0.0.1 ping statistics --- 00:14:38.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.368 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:14:38.368 13:44:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.368 13:44:40 -- nvmf/common.sh@410 -- # return 0 00:14:38.368 13:44:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.368 13:44:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.368 13:44:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.368 13:44:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.368 13:44:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.368 13:44:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.368 13:44:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.368 13:44:40 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:38.368 13:44:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:38.368 13:44:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:38.368 13:44:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.368 13:44:40 -- nvmf/common.sh@469 -- # nvmfpid=1529415 00:14:38.368 13:44:40 -- nvmf/common.sh@470 -- # waitforlisten 1529415 00:14:38.368 13:44:40 -- common/autotest_common.sh@819 -- # '[' -z 1529415 ']' 00:14:38.368 13:44:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.368 13:44:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.368 13:44:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.368 13:44:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:38.368 13:44:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.368 13:44:40 -- common/autotest_common.sh@10 -- # set +x 00:14:38.628 [2024-07-11 13:44:40.870059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:38.628 [2024-07-11 13:44:40.870108] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.628 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.628 [2024-07-11 13:44:40.928884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.628 [2024-07-11 13:44:40.968956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:38.628 [2024-07-11 13:44:40.969065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.628 [2024-07-11 13:44:40.969072] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.628 [2024-07-11 13:44:40.969078] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
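[annotation] Once this target finishes starting, ns_hotplug_stress.sh launches spdk_nvme_perf against nqn.2016-06.io.spdk:cnode1 and, while that I/O is in flight, repeatedly hot-removes and re-attaches a namespace and resizes the null bdev. The loop below is a condensed sketch of the RPC sequence visible in the remainder of this section (the exact loop structure is illustrative; the RPC names, arguments, and the -t 30 perf run are taken from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    NQN=nqn.2016-06.io.spdk:cnode1

    # 30 s of queued random reads against the subsystem, run in background.
    "$PERF" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do       # loop while perf is alive
        "$RPC" nvmf_subsystem_remove_ns "$NQN" 1    # hot-remove namespace 1
        "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0  # re-attach the Delay0 bdev
        null_size=$((null_size + 1))
        "$RPC" bdev_null_resize NULL1 "$null_size"  # grow NULL1 under load
    done

Each successful resize is acknowledged with the bare "true" lines that appear between the kill -0 liveness checks below.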
00:14:38.628 [2024-07-11 13:44:40.969190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.628 [2024-07-11 13:44:40.969278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.628 [2024-07-11 13:44:40.969280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.566 13:44:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.566 13:44:41 -- common/autotest_common.sh@852 -- # return 0 00:14:39.566 13:44:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:39.566 13:44:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:39.566 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:14:39.566 13:44:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.566 13:44:41 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:39.566 13:44:41 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:39.566 [2024-07-11 13:44:41.858041] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.566 13:44:41 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.825 13:44:42 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.825 [2024-07-11 13:44:42.207362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.825 13:44:42 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.084 13:44:42 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:40.343 Malloc0 00:14:40.343 13:44:42 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:40.343 Delay0 00:14:40.603 13:44:42 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.603 13:44:42 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:40.863 NULL1 00:14:40.863 13:44:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:40.863 13:44:43 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:40.863 13:44:43 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1529907 00:14:40.863 13:44:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:40.863 13:44:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.122 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.122 13:44:43 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.380 13:44:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:41.380 13:44:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:41.638 true 00:14:41.638 13:44:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:41.638 13:44:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.638 13:44:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.897 13:44:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:41.897 13:44:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:42.155 true 00:14:42.155 13:44:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:42.155 13:44:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.155 13:44:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.414 13:44:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:42.414 13:44:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:42.672 true 00:14:42.672 13:44:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:42.672 13:44:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.930 13:44:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.930 13:44:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:42.930 13:44:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:43.188 true 00:14:43.188 13:44:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:43.188 13:44:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.446 13:44:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.703 13:44:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:43.703 13:44:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:43.703 true 00:14:43.703 13:44:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:43.703 13:44:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.961 13:44:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:44.218 13:44:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:44.218 13:44:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:44.476 true 00:14:44.476 13:44:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:44.476 13:44:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.476 13:44:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.734 13:44:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:44.734 13:44:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:44.992 true 00:14:44.992 13:44:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:44.992 13:44:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.251 13:44:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.251 13:44:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:45.251 13:44:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:45.508 true 00:14:45.508 13:44:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:45.508 13:44:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.767 13:44:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.026 13:44:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:46.026 13:44:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:46.026 true 00:14:46.026 13:44:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:46.026 13:44:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.285 13:44:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.543 13:44:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:46.543 13:44:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:46.802 true 00:14:46.802 13:44:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:46.802 13:44:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.802 13:44:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.061 13:44:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:47.061 13:44:49 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:47.320 true 00:14:47.320 13:44:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:47.320 13:44:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.579 13:44:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.579 13:44:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:47.579 13:44:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:47.838 true 00:14:47.838 13:44:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:47.838 13:44:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.097 13:44:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.355 13:44:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:48.355 13:44:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:48.355 true 00:14:48.614 13:44:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:48.614 13:44:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.614 13:44:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.873 13:44:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:48.873 13:44:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:49.132 true 00:14:49.132 13:44:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:49.132 13:44:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.132 13:44:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.391 13:44:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:49.391 13:44:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:49.649 true 00:14:49.649 13:44:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:49.649 13:44:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.907 13:44:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.907 13:44:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:49.907 13:44:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1016 00:14:50.164 true 00:14:50.164 13:44:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:50.165 13:44:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.422 13:44:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.680 13:44:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:50.680 13:44:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:50.680 true 00:14:50.680 13:44:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:50.680 13:44:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.938 13:44:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.196 13:44:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:51.196 13:44:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:51.196 true 00:14:51.454 13:44:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:51.454 13:44:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.454 13:44:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.744 13:44:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:51.744 13:44:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:52.002 true 00:14:52.002 13:44:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:52.002 13:44:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.002 13:44:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.261 13:44:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:52.261 13:44:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:52.519 true 00:14:52.519 13:44:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:52.519 13:44:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.777 13:44:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.777 13:44:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:52.777 13:44:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:53.036 true 00:14:53.036 13:44:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:53.036 
13:44:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.294 13:44:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.294 13:44:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:53.294 13:44:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:53.553 true 00:14:53.553 13:44:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:53.553 13:44:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.812 13:44:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.070 13:44:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:54.070 13:44:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:54.070 true 00:14:54.070 13:44:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:54.070 13:44:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.328 13:44:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.587 13:44:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:54.587 13:44:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:54.846 true 00:14:54.846 13:44:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:54.846 13:44:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.846 13:44:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.104 13:44:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:55.104 13:44:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:55.362 true 00:14:55.362 13:44:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:55.362 13:44:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.620 13:44:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.879 13:44:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:55.879 13:44:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:55.879 true 00:14:55.879 13:44:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:55.879 13:44:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.138 13:44:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.397 13:44:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:56.397 13:44:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:56.397 true 00:14:56.397 13:44:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:56.397 13:44:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.655 13:44:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.913 13:44:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:56.913 13:44:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:56.913 true 00:14:56.913 13:44:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:56.913 13:44:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.172 13:44:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.432 13:44:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:57.432 13:44:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:57.691 true 00:14:57.691 13:44:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:57.691 13:44:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.691 13:45:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.950 13:45:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:57.950 13:45:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:58.209 true 00:14:58.209 13:45:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:58.209 13:45:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.468 13:45:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.468 13:45:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:58.468 13:45:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:58.727 true 00:14:58.727 13:45:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:58.727 13:45:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.986 13:45:01 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.245 13:45:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:59.245 13:45:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:59.245 true 00:14:59.245 13:45:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:14:59.245 13:45:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.504 13:45:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.763 13:45:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:59.763 13:45:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:15:00.022 true 00:15:00.022 13:45:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:00.022 13:45:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.022 13:45:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.280 13:45:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:15:00.280 13:45:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:15:00.539 true 00:15:00.539 13:45:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:00.539 13:45:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.798 13:45:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.798 13:45:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:15:00.798 13:45:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:15:01.057 true 00:15:01.057 13:45:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:01.057 13:45:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.314 13:45:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.314 13:45:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:15:01.314 13:45:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:15:01.572 true 00:15:01.572 13:45:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:01.572 13:45:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.830 13:45:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:15:02.089 13:45:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:15:02.089 13:45:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:15:02.089 true 00:15:02.089 13:45:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:02.090 13:45:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.348 13:45:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.607 13:45:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:15:02.607 13:45:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:15:02.607 true 00:15:02.867 13:45:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:02.867 13:45:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.867 13:45:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.127 13:45:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:15:03.127 13:45:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:15:03.386 true 00:15:03.386 13:45:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:03.386 13:45:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.386 13:45:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.646 13:45:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:15:03.646 13:45:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:15:03.905 true 00:15:03.905 13:45:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:03.905 13:45:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.164 13:45:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.164 13:45:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:15:04.164 13:45:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:15:04.424 true 00:15:04.424 13:45:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:04.424 13:45:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.683 13:45:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.683 13:45:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:15:04.683 13:45:07 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:15:04.942 true 00:15:04.942 13:45:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:04.942 13:45:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.201 13:45:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.460 13:45:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:15:05.460 13:45:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:15:05.460 true 00:15:05.460 13:45:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:05.460 13:45:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.719 13:45:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.978 13:45:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:05.978 13:45:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:05.978 true 00:15:06.237 13:45:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:06.237 13:45:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.237 13:45:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.496 13:45:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:06.496 13:45:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:06.756 true 00:15:06.756 13:45:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:06.756 13:45:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.756 13:45:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.015 13:45:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:07.015 13:45:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:07.275 true 00:15:07.275 13:45:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:07.275 13:45:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.603 13:45:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.603 13:45:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:07.603 13:45:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1047 00:15:07.889 true 00:15:07.889 13:45:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:07.889 13:45:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.889 13:45:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.148 13:45:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:08.148 13:45:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:08.408 true 00:15:08.408 13:45:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:08.408 13:45:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.408 13:45:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.667 13:45:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:08.667 13:45:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:08.927 true 00:15:08.927 13:45:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:08.927 13:45:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.186 13:45:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.186 13:45:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:09.186 13:45:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:09.445 true 00:15:09.445 13:45:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:09.445 13:45:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.704 13:45:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.963 13:45:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:15:09.963 13:45:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:15:09.963 true 00:15:09.963 13:45:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:09.963 13:45:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.222 13:45:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.481 13:45:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:15:10.481 13:45:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:15:10.481 true 00:15:10.740 13:45:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907 00:15:10.740 
13:45:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:10.740 13:45:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:10.999 13:45:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:15:10.999 13:45:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:15:11.258 true
00:15:11.258 13:45:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907
00:15:11.258 13:45:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:11.258 Initializing NVMe Controllers
00:15:11.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:11.258 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:15:11.258 Controller IO queue size 128, less than required.
00:15:11.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:11.258 WARNING: Some requested NVMe devices were skipped
00:15:11.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:15:11.258 Initialization complete. Launching workers.
00:15:11.258 ========================================================
00:15:11.258                                                                       Latency(us)
00:15:11.258 Device Information                                                  :     IOPS   MiB/s   Average      min      max
00:15:11.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 29224.83   14.27   4379.72  1565.70  7845.49
00:15:11.258 ========================================================
00:15:11.258 Total                                                               : 29224.83   14.27   4379.72  1565.70  7845.49
00:15:11.258
00:15:11.515 13:45:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:11.515 13:45:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:15:11.515 13:45:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:15:11.773 true
00:15:11.773 13:45:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1529907
00:15:11.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1529907) - No such process
00:15:11.773 13:45:14 -- target/ns_hotplug_stress.sh@53 -- # wait 1529907
00:15:11.773 13:45:14 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:12.032 13:45:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:15:12.290 null0
00:15:12.290 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
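Two readings from the burst above. First, the summary table is internally consistent: 29224.83 IOPS of 512-byte reads is 29224.83 * 512 / 2^20 ≈ 14.27 MiB/s, which is the figure in the MiB/s column; only NSID 2 appears because NSID 1 happened to be detached when the initiator connected (the 'Skipping inactive NS 1' line). Second, the iterations numbered 1001 through 1054 all come from one small loop; in outline it is as follows (a paraphrase of script lines 44-55 as reconstructed from the xtrace markers, not the script verbatim; $rpc as in the earlier sketch):

while kill -0 "$PERF_PID"; do                        # line 44: loop while spdk_nvme_perf lives
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1 (Delay0)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"         # grow NULL1; prints 'true' on success
done                                                 # perf's -t 30 expires: 'No such process'
wait "$PERF_PID"
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2

Each pass therefore exercises three hotplug paths against a live initiator at once: namespace detach, namespace attach, and a namespace resize.
00:15:12.290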
13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:12.290 13:45:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:12.548 null1 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:12.548 null2 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:12.548 13:45:14 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:12.807 null3 00:15:12.807 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:12.807 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:12.807 13:45:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:13.067 null4 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:13.067 null5 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.067 13:45:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:13.326 null6 00:15:13.326 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.326 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.326 13:45:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:13.585 null7 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
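From here to the end of the trace the script fans the same churn out in parallel: eight null bdevs, eight background workers, one namespace ID each. In outline (again reconstructed from the @58-@66 and @14-@18 xtrace markers, not the script verbatim):

add_remove() {                                    # add_remove <nsid> <bdev>
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do                # ten attach/detach rounds per worker
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8; pids=()
for ((i = 0; i < nthreads; i++)); do              # create null0..null7 as traced
    $rpc bdev_null_create "null$i" 100 4096       # args 100 and 4096 are size and block size
done
for ((i = 0; i < nthreads; i++)); do              # one background worker per namespace ID
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                 # the 'wait 1536183 1536184 ...' seen below

Because every worker owns a distinct NSID, the add and remove RPCs interleave freely; that interleaving, visible as the shuffled @17/@18 lines that follow, is the actual stress on the subsystem's namespace bookkeeping.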
00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@66 -- # wait 1536183 1536184 1536186 1536188 1536190 1536192 1536193 1536195 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.585 13:45:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.845 13:45:16 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.845 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:15:14.104 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.362 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.363 13:45:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.621 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.621 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.622 13:45:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.880 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.881 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.139 
13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.139 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.397 13:45:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.656 13:45:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.656 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.915 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.174 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.432 13:45:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.691 13:45:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:16.948 13:45:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:17.206 13:45:19 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:17.206 13:45:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:17.206 13:45:19 -- nvmf/common.sh@116 -- # sync 00:15:17.206 13:45:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:17.206 13:45:19 -- nvmf/common.sh@119 -- # set +e 00:15:17.206 13:45:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:17.206 13:45:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:17.206 rmmod nvme_tcp 00:15:17.206 rmmod nvme_fabrics 00:15:17.206 rmmod nvme_keyring 00:15:17.206 13:45:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:17.206 13:45:19 -- nvmf/common.sh@123 -- # set -e 00:15:17.206 13:45:19 -- nvmf/common.sh@124 -- # return 0 00:15:17.206 13:45:19 -- nvmf/common.sh@477 -- # '[' -n 1529415 ']' 00:15:17.206 13:45:19 -- nvmf/common.sh@478 -- # killprocess 1529415 00:15:17.206 13:45:19 -- common/autotest_common.sh@926 -- # '[' -z 1529415 ']' 00:15:17.206 13:45:19 -- common/autotest_common.sh@930 -- # kill -0 1529415 00:15:17.206 13:45:19 -- common/autotest_common.sh@931 -- # uname 00:15:17.206 13:45:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:17.206 13:45:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1529415 00:15:17.465 13:45:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 
00:15:17.465 13:45:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:17.465 13:45:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1529415' 00:15:17.465 killing process with pid 1529415 00:15:17.465 13:45:19 -- common/autotest_common.sh@945 -- # kill 1529415 00:15:17.465 13:45:19 -- common/autotest_common.sh@950 -- # wait 1529415 00:15:17.465 13:45:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:17.465 13:45:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:17.465 13:45:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:17.465 13:45:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.465 13:45:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:17.465 13:45:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.465 13:45:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.465 13:45:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.995 13:45:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:19.995 00:15:19.995 real 0m46.751s 00:15:19.995 user 3m16.800s 00:15:19.995 sys 0m16.854s 00:15:19.995 13:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.995 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:15:19.995 ************************************ 00:15:19.995 END TEST nvmf_ns_hotplug_stress 00:15:19.995 ************************************ 00:15:19.995 13:45:21 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:19.995 13:45:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:19.995 13:45:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.995 13:45:21 -- common/autotest_common.sh@10 -- # set +x 00:15:19.995 ************************************ 00:15:19.995 START TEST nvmf_connect_stress 00:15:19.995 ************************************ 00:15:19.995 13:45:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:19.995 * Looking for test storage... 
00:15:19.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.995 13:45:22 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.995 13:45:22 -- nvmf/common.sh@7 -- # uname -s 00:15:19.995 13:45:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.995 13:45:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.995 13:45:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.995 13:45:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.995 13:45:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.995 13:45:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.995 13:45:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.995 13:45:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.995 13:45:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.995 13:45:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.995 13:45:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.995 13:45:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.995 13:45:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.995 13:45:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.995 13:45:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.995 13:45:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.995 13:45:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.995 13:45:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.995 13:45:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.995 13:45:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.995 13:45:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.996 13:45:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.996 13:45:22 -- paths/export.sh@5 -- # export PATH 00:15:19.996 13:45:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.996 13:45:22 -- nvmf/common.sh@46 -- # : 0 00:15:19.996 13:45:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.996 13:45:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.996 13:45:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.996 13:45:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.996 13:45:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.996 13:45:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:19.996 13:45:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.996 13:45:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.996 13:45:22 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:19.996 13:45:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:19.996 13:45:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.996 13:45:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:19.996 13:45:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:19.996 13:45:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:19.996 13:45:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.996 13:45:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.996 13:45:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.996 13:45:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:19.996 13:45:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:19.996 13:45:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:19.996 13:45:22 -- common/autotest_common.sh@10 -- # set +x 00:15:25.266 13:45:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.266 13:45:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:25.266 13:45:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:25.266 13:45:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:25.266 13:45:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:25.266 13:45:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:25.266 13:45:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:25.266 13:45:27 -- nvmf/common.sh@294 -- # net_devs=() 00:15:25.266 13:45:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:25.266 13:45:27 -- nvmf/common.sh@295 -- # e810=() 00:15:25.266 13:45:27 -- nvmf/common.sh@295 -- # local -ga e810 00:15:25.266 13:45:27 -- nvmf/common.sh@296 -- # x722=() 
00:15:25.267 13:45:27 -- nvmf/common.sh@296 -- # local -ga x722 00:15:25.267 13:45:27 -- nvmf/common.sh@297 -- # mlx=() 00:15:25.267 13:45:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:25.267 13:45:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.267 13:45:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:25.267 13:45:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:25.267 13:45:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.267 13:45:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:25.267 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:25.267 13:45:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.267 13:45:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:25.267 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:25.267 13:45:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.267 13:45:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.267 13:45:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.267 13:45:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:25.267 Found net devices under 0000:86:00.0: cvl_0_0 00:15:25.267 13:45:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
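[annotation] The trace above is the harness enumerating supported NICs: it builds allow-lists of Intel e810/x722 and Mellanox PCI IDs, intersects them with the host's PCI bus cache, and resolves each match (here 0000:86:00.0 and 0000:86:00.1, both 0x8086:0x159b) to its kernel net device through sysfs. A condensed sketch of that resolution step, based on the nvmf/common.sh records in the log (variable names taken from the trace; the exact control flow is an assumption):

    for pci in "${pci_devs[@]}"; do
        # glob the net/ directory sysfs exposes under each PCI function
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        if (( ${#pci_net_devs[@]} == 0 )); then
            continue                                # no kernel netdev bound
        fi
        pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep iface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done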
00:15:25.267 13:45:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.267 13:45:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.267 13:45:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.267 13:45:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:25.267 Found net devices under 0000:86:00.1: cvl_0_1 00:15:25.267 13:45:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.267 13:45:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:25.267 13:45:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:25.267 13:45:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.267 13:45:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.267 13:45:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.267 13:45:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:25.267 13:45:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.267 13:45:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.267 13:45:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:25.267 13:45:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.267 13:45:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.267 13:45:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:25.267 13:45:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:25.267 13:45:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.267 13:45:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.267 13:45:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.267 13:45:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.267 13:45:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:25.267 13:45:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.267 13:45:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.267 13:45:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.267 13:45:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:25.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:15:25.267 00:15:25.267 --- 10.0.0.2 ping statistics --- 00:15:25.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.267 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:25.267 13:45:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:25.267 00:15:25.267 --- 10.0.0.1 ping statistics --- 00:15:25.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.267 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:25.267 13:45:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.267 13:45:27 -- nvmf/common.sh@410 -- # return 0 00:15:25.267 13:45:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.267 13:45:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.267 13:45:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.267 13:45:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.267 13:45:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.267 13:45:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.267 13:45:27 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:25.267 13:45:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.267 13:45:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.267 13:45:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.267 13:45:27 -- nvmf/common.sh@469 -- # nvmfpid=1540355 00:15:25.267 13:45:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.267 13:45:27 -- nvmf/common.sh@470 -- # waitforlisten 1540355 00:15:25.267 13:45:27 -- common/autotest_common.sh@819 -- # '[' -z 1540355 ']' 00:15:25.267 13:45:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.267 13:45:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.267 13:45:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.267 13:45:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.267 13:45:27 -- common/autotest_common.sh@10 -- # set +x 00:15:25.267 [2024-07-11 13:45:27.445573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:25.267 [2024-07-11 13:45:27.445612] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.267 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.267 [2024-07-11 13:45:27.503636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.267 [2024-07-11 13:45:27.541462] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.267 [2024-07-11 13:45:27.541577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.267 [2024-07-11 13:45:27.541585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.267 [2024-07-11 13:45:27.541592] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
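[annotation] The nvmf_tcp_init records above set up the single-host test topology: the target-side port cvl_0_0 is moved into a private network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2) talk over a real back-to-back phy link on one machine, verified by the two pings. Pulled out of the trace in order (commands verbatim from the log; only the comments are added):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" a few records later: the target process must live in the namespace that owns 10.0.0.2.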
00:15:25.267 [2024-07-11 13:45:27.541701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.267 [2024-07-11 13:45:27.541785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.267 [2024-07-11 13:45:27.541787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.835 13:45:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.835 13:45:28 -- common/autotest_common.sh@852 -- # return 0 00:15:25.835 13:45:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:25.835 13:45:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:25.835 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.835 13:45:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.835 13:45:28 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.835 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.835 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.094 [2024-07-11 13:45:28.297246] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.094 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.094 13:45:28 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:26.094 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.094 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.094 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.094 13:45:28 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.094 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.094 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.094 [2024-07-11 13:45:28.326282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.094 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.094 13:45:28 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:26.094 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.094 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.094 NULL1 00:15:26.094 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.094 13:45:28 -- target/connect_stress.sh@21 -- # PERF_PID=1540606 00:15:26.094 13:45:28 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:26.094 13:45:28 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:26.094 13:45:28 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.094 13:45:28 -- target/connect_stress.sh@28 -- # cat 00:15:26.094 13:45:28 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:26.094 13:45:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.094 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.094 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.353 13:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.353 13:45:28 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:26.353 13:45:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.353 13:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.353 13:45:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.611 13:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.611 13:45:29 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:26.611 13:45:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.611 13:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.611 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:15:27.178 13:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.178 13:45:29 -- target/connect_stress.sh@34 -- # 
kill -0 1540606 00:15:27.178 13:45:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.178 13:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.178 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:15:27.436 13:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.436 13:45:29 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:27.436 13:45:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.436 13:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.436 13:45:29 -- common/autotest_common.sh@10 -- # set +x 00:15:27.697 13:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.697 13:45:30 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:27.697 13:45:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.697 13:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.697 13:45:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.022 13:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.022 13:45:30 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:28.022 13:45:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.022 13:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.022 13:45:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.280 13:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.280 13:45:30 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:28.280 13:45:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.280 13:45:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.280 13:45:30 -- common/autotest_common.sh@10 -- # set +x 00:15:28.847 13:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.847 13:45:31 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:28.847 13:45:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.847 13:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.847 13:45:31 -- common/autotest_common.sh@10 -- # set +x 00:15:29.106 13:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.106 13:45:31 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:29.106 13:45:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.106 13:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.106 13:45:31 -- common/autotest_common.sh@10 -- # set +x 00:15:29.364 13:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.364 13:45:31 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:29.364 13:45:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.364 13:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.364 13:45:31 -- common/autotest_common.sh@10 -- # set +x 00:15:29.623 13:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.623 13:45:31 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:29.623 13:45:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.623 13:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.623 13:45:31 -- common/autotest_common.sh@10 -- # set +x 00:15:29.882 13:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.882 13:45:32 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:29.882 13:45:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.882 13:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.882 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:30.447 13:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.447 13:45:32 -- target/connect_stress.sh@34 -- # kill -0 
1540606 00:15:30.447 13:45:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.447 13:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.447 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:30.705 13:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.705 13:45:32 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:30.705 13:45:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.705 13:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.705 13:45:32 -- common/autotest_common.sh@10 -- # set +x 00:15:30.964 13:45:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.964 13:45:33 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:30.964 13:45:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.964 13:45:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.964 13:45:33 -- common/autotest_common.sh@10 -- # set +x 00:15:31.222 13:45:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.222 13:45:33 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:31.222 13:45:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.222 13:45:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.222 13:45:33 -- common/autotest_common.sh@10 -- # set +x 00:15:31.480 13:45:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.480 13:45:33 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:31.480 13:45:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.480 13:45:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.480 13:45:33 -- common/autotest_common.sh@10 -- # set +x 00:15:32.047 13:45:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.047 13:45:34 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:32.047 13:45:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.047 13:45:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.047 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:15:32.305 13:45:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.305 13:45:34 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:32.305 13:45:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.305 13:45:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.305 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:15:32.564 13:45:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.564 13:45:34 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:32.564 13:45:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.564 13:45:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.564 13:45:34 -- common/autotest_common.sh@10 -- # set +x 00:15:32.822 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.822 13:45:35 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:32.822 13:45:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.822 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.822 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.389 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.389 13:45:35 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:33.389 13:45:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.389 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.389 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.648 13:45:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.648 13:45:35 -- target/connect_stress.sh@34 -- # kill -0 1540606 
00:15:33.648 13:45:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.648 13:45:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.648 13:45:35 -- common/autotest_common.sh@10 -- # set +x 00:15:33.906 13:45:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.906 13:45:36 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:33.906 13:45:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.906 13:45:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.906 13:45:36 -- common/autotest_common.sh@10 -- # set +x 00:15:34.165 13:45:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.165 13:45:36 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:34.165 13:45:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.165 13:45:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.165 13:45:36 -- common/autotest_common.sh@10 -- # set +x 00:15:34.424 13:45:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.424 13:45:36 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:34.424 13:45:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.424 13:45:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.424 13:45:36 -- common/autotest_common.sh@10 -- # set +x 00:15:34.992 13:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.992 13:45:37 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:34.992 13:45:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.992 13:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.992 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:15:35.251 13:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.251 13:45:37 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:35.251 13:45:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.251 13:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.251 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:15:35.509 13:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.509 13:45:37 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:35.509 13:45:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.509 13:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.509 13:45:37 -- common/autotest_common.sh@10 -- # set +x 00:15:35.767 13:45:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.767 13:45:38 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:35.767 13:45:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.767 13:45:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.767 13:45:38 -- common/autotest_common.sh@10 -- # set +x 00:15:36.026 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.026 13:45:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.026 13:45:38 -- target/connect_stress.sh@34 -- # kill -0 1540606 00:15:36.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1540606) - No such process 00:15:36.026 13:45:38 -- target/connect_stress.sh@38 -- # wait 1540606 00:15:36.026 13:45:38 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:36.284 13:45:38 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:36.284 13:45:38 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:36.284 13:45:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.284 13:45:38 -- nvmf/common.sh@116 -- # sync 
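The long run of kill -0 1540606 / rpc_cmd pairs above is connect_stress.sh's supervision loop: the test keeps probing the background stress process (kill -0 only checks that the PID exists, it sends no signal) while pushing RPCs at the target, and falls out of the loop once the PID disappears, which is why it ends in a harmless "No such process" from kill. A minimal sketch of that pattern; the rpc_cmd stand-in and socket path here are illustrative, not the exact autotest helpers:

  #!/usr/bin/env bash
  # Sketch of the poll-while-exercising loop traced above (hypothetical helper names).
  stress_pid=$1                                              # e.g. 1540606 in this run
  rpc_cmd() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # assumed default RPC socket
  # kill -0 probes the PID without signalling; it fails once the process exits.
  while kill -0 "$stress_pid" 2>/dev/null; do
      rpc_cmd nvmf_get_subsystems >/dev/null                 # keep the target's RPC path busy
  done
  wait "$stress_pid" 2>/dev/null || true                     # reap; tolerate "No such process"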
00:15:36.284 13:45:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:36.284 13:45:38 -- nvmf/common.sh@119 -- # set +e 00:15:36.284 13:45:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:36.285 13:45:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:36.285 rmmod nvme_tcp 00:15:36.285 rmmod nvme_fabrics 00:15:36.285 rmmod nvme_keyring 00:15:36.285 13:45:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:36.285 13:45:38 -- nvmf/common.sh@123 -- # set -e 00:15:36.285 13:45:38 -- nvmf/common.sh@124 -- # return 0 00:15:36.285 13:45:38 -- nvmf/common.sh@477 -- # '[' -n 1540355 ']' 00:15:36.285 13:45:38 -- nvmf/common.sh@478 -- # killprocess 1540355 00:15:36.285 13:45:38 -- common/autotest_common.sh@926 -- # '[' -z 1540355 ']' 00:15:36.285 13:45:38 -- common/autotest_common.sh@930 -- # kill -0 1540355 00:15:36.285 13:45:38 -- common/autotest_common.sh@931 -- # uname 00:15:36.285 13:45:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:36.285 13:45:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1540355 00:15:36.285 13:45:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:36.285 13:45:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:36.285 13:45:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1540355' 00:15:36.285 killing process with pid 1540355 00:15:36.285 13:45:38 -- common/autotest_common.sh@945 -- # kill 1540355 00:15:36.285 13:45:38 -- common/autotest_common.sh@950 -- # wait 1540355 00:15:36.543 13:45:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:36.543 13:45:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:36.543 13:45:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:36.543 13:45:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.543 13:45:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:36.543 13:45:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.543 13:45:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.543 13:45:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.447 13:45:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:38.447 00:15:38.447 real 0m18.878s 00:15:38.447 user 0m40.897s 00:15:38.447 sys 0m8.005s 00:15:38.447 13:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.447 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:38.447 ************************************ 00:15:38.447 END TEST nvmf_connect_stress 00:15:38.447 ************************************ 00:15:38.447 13:45:40 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:38.447 13:45:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:38.447 13:45:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.447 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:38.447 ************************************ 00:15:38.447 START TEST nvmf_fused_ordering 00:15:38.447 ************************************ 00:15:38.447 13:45:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:38.706 * Looking for test storage... 
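nvmftestfini above tears down tolerantly: set +e so a missing module does not abort the exit trap, modprobe -r retried in a loop for nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess, which double-checks via ps that the PID still names the expected process (reactor_1 here) before signalling it. Roughly the killprocess shape, sketched from the trace rather than copied from the autotest source (the sudo special-case mirrors the '[' reactor_1 = sudo ']' test above):

  # Sketch of the killprocess pattern traced above; not the verbatim autotest code.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 in this run
      if [ "$name" = sudo ]; then
          sudo kill "$pid"                          # a process launched via sudo needs sudo to kill
      else
          kill "$pid"
      fi
      wait "$pid" 2>/dev/null || true               # wait only works for this shell's children
  }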
00:15:38.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.706 13:45:40 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.706 13:45:40 -- nvmf/common.sh@7 -- # uname -s 00:15:38.706 13:45:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.706 13:45:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.706 13:45:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.706 13:45:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.706 13:45:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.706 13:45:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.706 13:45:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.707 13:45:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.707 13:45:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.707 13:45:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.707 13:45:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.707 13:45:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.707 13:45:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.707 13:45:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.707 13:45:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.707 13:45:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.707 13:45:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.707 13:45:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.707 13:45:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.707 13:45:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.707 13:45:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.707 13:45:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.707 13:45:40 -- paths/export.sh@5 -- # export PATH 00:15:38.707 13:45:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.707 13:45:40 -- nvmf/common.sh@46 -- # : 0 00:15:38.707 13:45:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:38.707 13:45:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:38.707 13:45:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:38.707 13:45:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.707 13:45:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.707 13:45:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:38.707 13:45:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:38.707 13:45:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:38.707 13:45:40 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:38.707 13:45:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:38.707 13:45:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.707 13:45:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:38.707 13:45:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:38.707 13:45:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:38.707 13:45:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.707 13:45:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.707 13:45:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.707 13:45:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:38.707 13:45:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:38.707 13:45:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:38.707 13:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:43.981 13:45:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:43.981 13:45:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:43.981 13:45:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:43.981 13:45:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:43.981 13:45:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:43.981 13:45:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:43.981 13:45:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:43.981 13:45:46 -- nvmf/common.sh@294 -- # net_devs=() 00:15:43.981 13:45:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:43.981 13:45:46 -- nvmf/common.sh@295 -- # e810=() 00:15:43.981 13:45:46 -- nvmf/common.sh@295 -- # local -ga e810 00:15:43.981 13:45:46 -- nvmf/common.sh@296 -- # x722=() 
00:15:43.981 13:45:46 -- nvmf/common.sh@296 -- # local -ga x722 00:15:43.981 13:45:46 -- nvmf/common.sh@297 -- # mlx=() 00:15:43.981 13:45:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:43.981 13:45:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.981 13:45:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:43.981 13:45:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:43.981 13:45:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:43.981 13:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:43.981 13:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:43.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:43.981 13:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:43.981 13:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:43.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:43.981 13:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.981 13:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:43.982 13:45:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:43.982 13:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.982 13:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:43.982 13:45:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.982 13:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:43.982 Found net devices under 0000:86:00.0: cvl_0_0 00:15:43.982 13:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
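The pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above is how the script maps each whitelisted PCI function (here the two Intel 0x159b "ice" ports) to its kernel interface names, cvl_0_0 and cvl_0_1. The same lookup as a standalone sketch; the address is the one from this run:

  # Sketch: list the net interfaces backing a PCI function via sysfs,
  # exactly the glob the trace above performs. Address taken from this host.
  pci=0000:86:00.0
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$path" ] || continue    # empty when the port is bound to a userspace driver
      echo "${path##*/}"            # prints cvl_0_0 on this host
  done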
00:15:43.982 13:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:43.982 13:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.982 13:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:43.982 13:45:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.982 13:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:43.982 Found net devices under 0000:86:00.1: cvl_0_1 00:15:43.982 13:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.982 13:45:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:43.982 13:45:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:43.982 13:45:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:43.982 13:45:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.982 13:45:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.982 13:45:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.982 13:45:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:43.982 13:45:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.982 13:45:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.982 13:45:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:43.982 13:45:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.982 13:45:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.982 13:45:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:43.982 13:45:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:43.982 13:45:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.982 13:45:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.982 13:45:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.982 13:45:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.982 13:45:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:43.982 13:45:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.982 13:45:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.982 13:45:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.982 13:45:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:43.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:15:43.982 00:15:43.982 --- 10.0.0.2 ping statistics --- 00:15:43.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.982 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:15:43.982 13:45:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:15:43.982 00:15:43.982 --- 10.0.0.1 ping statistics --- 00:15:43.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.982 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:15:43.982 13:45:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.982 13:45:46 -- nvmf/common.sh@410 -- # return 0 00:15:43.982 13:45:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:43.982 13:45:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.982 13:45:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:43.982 13:45:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.982 13:45:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:43.982 13:45:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:43.982 13:45:46 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:43.982 13:45:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:43.982 13:45:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:43.982 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:15:43.982 13:45:46 -- nvmf/common.sh@469 -- # nvmfpid=1545800 00:15:43.982 13:45:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:43.982 13:45:46 -- nvmf/common.sh@470 -- # waitforlisten 1545800 00:15:43.982 13:45:46 -- common/autotest_common.sh@819 -- # '[' -z 1545800 ']' 00:15:43.982 13:45:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.982 13:45:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:43.982 13:45:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.982 13:45:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:43.982 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:15:44.240 [2024-07-11 13:45:46.465282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:44.240 [2024-07-11 13:45:46.465323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.240 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.240 [2024-07-11 13:45:46.522390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.240 [2024-07-11 13:45:46.558850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.240 [2024-07-11 13:45:46.558966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.240 [2024-07-11 13:45:46.558974] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.240 [2024-07-11 13:45:46.558980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
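nvmf_tcp_init above wires the two ports for single-host TCP testing without loopback: the target port moves into a fresh network namespace and gets 10.0.0.2/24, the initiator port stays in the root namespace on 10.0.0.1/24, port 4420 is opened in iptables, and both directions are ping-verified before the target app is started inside the namespace. Condensed, using the interface names from this run:

  # Sketch of the namespace wiring traced above (nvmf_tcp_init); names vary per host.
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the netns
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target ns sanity check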
00:15:44.240 [2024-07-11 13:45:46.559002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.807 13:45:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:44.807 13:45:47 -- common/autotest_common.sh@852 -- # return 0 00:15:44.807 13:45:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:44.807 13:45:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:44.807 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 13:45:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.066 13:45:47 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:45.066 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.066 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 [2024-07-11 13:45:47.284100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.066 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.066 13:45:47 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:45.066 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.066 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.066 13:45:47 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.066 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.066 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 [2024-07-11 13:45:47.300271] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.066 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.066 13:45:47 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:45.066 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.066 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 NULL1 00:15:45.066 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.066 13:45:47 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:45.066 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.066 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.066 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.066 13:45:47 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:45.067 13:45:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.067 13:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:45.067 13:45:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.067 13:45:47 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:45.067 [2024-07-11 13:45:47.353447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
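The rpc_cmd sequence above is the entire target setup for this test: a TCP transport with 8192-byte IO units, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 queue pairs (-m 10), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev exposed as namespace 1 (the "size: 1GB" the tool reports below). The same calls issued directly through scripts/rpc.py, flags copied from the trace; the default /var/tmp/spdk.sock socket is assumed:

  # Sketch: the provisioning RPCs traced above, as direct rpc.py invocations.
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512          # 1000 MiB of 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then point the test tool at the new listener (connection string from the trace):
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'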
00:15:45.067 [2024-07-11 13:45:47.353492] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545845 ] 00:15:45.067 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.325 Attached to nqn.2016-06.io.spdk:cnode1 00:15:45.325 Namespace ID: 1 size: 1GB 00:15:45.325 fused_ordering(0) 00:15:45.325 fused_ordering(1) 00:15:45.325 fused_ordering(2) 00:15:45.325 fused_ordering(3) 00:15:45.325 fused_ordering(4) 00:15:45.325 fused_ordering(5) 00:15:45.325 fused_ordering(6) 00:15:45.325 fused_ordering(7) 00:15:45.325 fused_ordering(8) 00:15:45.325 fused_ordering(9) 00:15:45.325 fused_ordering(10) 00:15:45.325 fused_ordering(11) 00:15:45.325 fused_ordering(12) 00:15:45.325 fused_ordering(13) 00:15:45.325 fused_ordering(14) 00:15:45.325 fused_ordering(15) 00:15:45.325 fused_ordering(16) 00:15:45.325 fused_ordering(17) 00:15:45.325 fused_ordering(18) 00:15:45.325 fused_ordering(19) 00:15:45.325 fused_ordering(20) 00:15:45.325 fused_ordering(21) 00:15:45.325 fused_ordering(22) 00:15:45.325 fused_ordering(23) 00:15:45.325 fused_ordering(24) 00:15:45.325 fused_ordering(25) 00:15:45.325 fused_ordering(26) 00:15:45.325 fused_ordering(27) 00:15:45.325 fused_ordering(28) 00:15:45.325 fused_ordering(29) 00:15:45.325 fused_ordering(30) 00:15:45.325 fused_ordering(31) 00:15:45.325 fused_ordering(32) 00:15:45.325 fused_ordering(33) 00:15:45.325 fused_ordering(34) 00:15:45.325 fused_ordering(35) 00:15:45.325 fused_ordering(36) 00:15:45.325 fused_ordering(37) 00:15:45.325 fused_ordering(38) 00:15:45.325 fused_ordering(39) 00:15:45.325 fused_ordering(40) 00:15:45.325 fused_ordering(41) 00:15:45.325 fused_ordering(42) 00:15:45.325 fused_ordering(43) 00:15:45.325 fused_ordering(44) 00:15:45.325 fused_ordering(45) 00:15:45.325 fused_ordering(46) 00:15:45.325 fused_ordering(47) 00:15:45.325 fused_ordering(48) 00:15:45.325 fused_ordering(49) 00:15:45.325 fused_ordering(50) 00:15:45.325 fused_ordering(51) 00:15:45.325 fused_ordering(52) 00:15:45.325 fused_ordering(53) 00:15:45.325 fused_ordering(54) 00:15:45.325 fused_ordering(55) 00:15:45.325 fused_ordering(56) 00:15:45.325 fused_ordering(57) 00:15:45.325 fused_ordering(58) 00:15:45.325 fused_ordering(59) 00:15:45.325 fused_ordering(60) 00:15:45.325 fused_ordering(61) 00:15:45.325 fused_ordering(62) 00:15:45.325 fused_ordering(63) 00:15:45.325 fused_ordering(64) 00:15:45.325 fused_ordering(65) 00:15:45.325 fused_ordering(66) 00:15:45.325 fused_ordering(67) 00:15:45.325 fused_ordering(68) 00:15:45.325 fused_ordering(69) 00:15:45.325 fused_ordering(70) 00:15:45.325 fused_ordering(71) 00:15:45.325 fused_ordering(72) 00:15:45.325 fused_ordering(73) 00:15:45.325 fused_ordering(74) 00:15:45.325 fused_ordering(75) 00:15:45.325 fused_ordering(76) 00:15:45.325 fused_ordering(77) 00:15:45.325 fused_ordering(78) 00:15:45.325 fused_ordering(79) 00:15:45.325 fused_ordering(80) 00:15:45.325 fused_ordering(81) 00:15:45.325 fused_ordering(82) 00:15:45.325 fused_ordering(83) 00:15:45.325 fused_ordering(84) 00:15:45.325 fused_ordering(85) 00:15:45.325 fused_ordering(86) 00:15:45.325 fused_ordering(87) 00:15:45.325 fused_ordering(88) 00:15:45.325 fused_ordering(89) 00:15:45.325 fused_ordering(90) 00:15:45.325 fused_ordering(91) 00:15:45.325 fused_ordering(92) 00:15:45.325 fused_ordering(93) 00:15:45.325 fused_ordering(94) 00:15:45.325 fused_ordering(95) 00:15:45.325 fused_ordering(96) 00:15:45.325 
fused_ordering(97) 00:15:45.325 fused_ordering(98) 00:15:45.325 fused_ordering(99) 00:15:45.325 fused_ordering(100) 00:15:45.325 fused_ordering(101) 00:15:45.325 fused_ordering(102) 00:15:45.325 fused_ordering(103) 00:15:45.325 fused_ordering(104) 00:15:45.325 fused_ordering(105) 00:15:45.325 fused_ordering(106) 00:15:45.325 fused_ordering(107) 00:15:45.325 fused_ordering(108) 00:15:45.325 fused_ordering(109) 00:15:45.325 fused_ordering(110) 00:15:45.325 fused_ordering(111) 00:15:45.325 fused_ordering(112) 00:15:45.325 fused_ordering(113) 00:15:45.325 fused_ordering(114) 00:15:45.325 fused_ordering(115) 00:15:45.325 fused_ordering(116) 00:15:45.325 fused_ordering(117) 00:15:45.325 fused_ordering(118) 00:15:45.325 fused_ordering(119) 00:15:45.325 fused_ordering(120) 00:15:45.325 fused_ordering(121) 00:15:45.325 fused_ordering(122) 00:15:45.325 fused_ordering(123) 00:15:45.325 fused_ordering(124) 00:15:45.325 fused_ordering(125) 00:15:45.325 fused_ordering(126) 00:15:45.325 fused_ordering(127) 00:15:45.325 fused_ordering(128) 00:15:45.325 fused_ordering(129) 00:15:45.325 fused_ordering(130) 00:15:45.325 fused_ordering(131) 00:15:45.325 fused_ordering(132) 00:15:45.325 fused_ordering(133) 00:15:45.325 fused_ordering(134) 00:15:45.325 fused_ordering(135) 00:15:45.325 fused_ordering(136) 00:15:45.325 fused_ordering(137) 00:15:45.325 fused_ordering(138) 00:15:45.325 fused_ordering(139) 00:15:45.326 fused_ordering(140) 00:15:45.326 fused_ordering(141) 00:15:45.326 fused_ordering(142) 00:15:45.326 fused_ordering(143) 00:15:45.326 fused_ordering(144) 00:15:45.326 fused_ordering(145) 00:15:45.326 fused_ordering(146) 00:15:45.326 fused_ordering(147) 00:15:45.326 fused_ordering(148) 00:15:45.326 fused_ordering(149) 00:15:45.326 fused_ordering(150) 00:15:45.326 fused_ordering(151) 00:15:45.326 fused_ordering(152) 00:15:45.326 fused_ordering(153) 00:15:45.326 fused_ordering(154) 00:15:45.326 fused_ordering(155) 00:15:45.326 fused_ordering(156) 00:15:45.326 fused_ordering(157) 00:15:45.326 fused_ordering(158) 00:15:45.326 fused_ordering(159) 00:15:45.326 fused_ordering(160) 00:15:45.326 fused_ordering(161) 00:15:45.326 fused_ordering(162) 00:15:45.326 fused_ordering(163) 00:15:45.326 fused_ordering(164) 00:15:45.326 fused_ordering(165) 00:15:45.326 fused_ordering(166) 00:15:45.326 fused_ordering(167) 00:15:45.326 fused_ordering(168) 00:15:45.326 fused_ordering(169) 00:15:45.326 fused_ordering(170) 00:15:45.326 fused_ordering(171) 00:15:45.326 fused_ordering(172) 00:15:45.326 fused_ordering(173) 00:15:45.326 fused_ordering(174) 00:15:45.326 fused_ordering(175) 00:15:45.326 fused_ordering(176) 00:15:45.326 fused_ordering(177) 00:15:45.326 fused_ordering(178) 00:15:45.326 fused_ordering(179) 00:15:45.326 fused_ordering(180) 00:15:45.326 fused_ordering(181) 00:15:45.326 fused_ordering(182) 00:15:45.326 fused_ordering(183) 00:15:45.326 fused_ordering(184) 00:15:45.326 fused_ordering(185) 00:15:45.326 fused_ordering(186) 00:15:45.326 fused_ordering(187) 00:15:45.326 fused_ordering(188) 00:15:45.326 fused_ordering(189) 00:15:45.326 fused_ordering(190) 00:15:45.326 fused_ordering(191) 00:15:45.326 fused_ordering(192) 00:15:45.326 fused_ordering(193) 00:15:45.326 fused_ordering(194) 00:15:45.326 fused_ordering(195) 00:15:45.326 fused_ordering(196) 00:15:45.326 fused_ordering(197) 00:15:45.326 fused_ordering(198) 00:15:45.326 fused_ordering(199) 00:15:45.326 fused_ordering(200) 00:15:45.326 fused_ordering(201) 00:15:45.326 fused_ordering(202) 00:15:45.326 fused_ordering(203) 00:15:45.326 fused_ordering(204) 
00:15:45.326 fused_ordering(205) 00:15:45.584 fused_ordering(206) 00:15:45.584 fused_ordering(207) 00:15:45.584 fused_ordering(208) 00:15:45.584 fused_ordering(209) 00:15:45.584 fused_ordering(210) 00:15:45.584 fused_ordering(211) 00:15:45.584 fused_ordering(212) 00:15:45.584 fused_ordering(213) 00:15:45.584 fused_ordering(214) 00:15:45.584 fused_ordering(215) 00:15:45.584 fused_ordering(216) 00:15:45.584 fused_ordering(217) 00:15:45.584 fused_ordering(218) 00:15:45.584 fused_ordering(219) 00:15:45.584 fused_ordering(220) 00:15:45.584 fused_ordering(221) 00:15:45.584 fused_ordering(222) 00:15:45.584 fused_ordering(223) 00:15:45.584 fused_ordering(224) 00:15:45.584 fused_ordering(225) 00:15:45.584 fused_ordering(226) 00:15:45.585 fused_ordering(227) 00:15:45.585 fused_ordering(228) 00:15:45.585 fused_ordering(229) 00:15:45.585 fused_ordering(230) 00:15:45.585 fused_ordering(231) 00:15:45.585 fused_ordering(232) 00:15:45.585 fused_ordering(233) 00:15:45.585 fused_ordering(234) 00:15:45.585 fused_ordering(235) 00:15:45.585 fused_ordering(236) 00:15:45.585 fused_ordering(237) 00:15:45.585 fused_ordering(238) 00:15:45.585 fused_ordering(239) 00:15:45.585 fused_ordering(240) 00:15:45.585 fused_ordering(241) 00:15:45.585 fused_ordering(242) 00:15:45.585 fused_ordering(243) 00:15:45.585 fused_ordering(244) 00:15:45.585 fused_ordering(245) 00:15:45.585 fused_ordering(246) 00:15:45.585 fused_ordering(247) 00:15:45.585 fused_ordering(248) 00:15:45.585 fused_ordering(249) 00:15:45.585 fused_ordering(250) 00:15:45.585 fused_ordering(251) 00:15:45.585 fused_ordering(252) 00:15:45.585 fused_ordering(253) 00:15:45.585 fused_ordering(254) 00:15:45.585 fused_ordering(255) 00:15:45.585 fused_ordering(256) 00:15:45.585 fused_ordering(257) 00:15:45.585 fused_ordering(258) 00:15:45.585 fused_ordering(259) 00:15:45.585 fused_ordering(260) 00:15:45.585 fused_ordering(261) 00:15:45.585 fused_ordering(262) 00:15:45.585 fused_ordering(263) 00:15:45.585 fused_ordering(264) 00:15:45.585 fused_ordering(265) 00:15:45.585 fused_ordering(266) 00:15:45.585 fused_ordering(267) 00:15:45.585 fused_ordering(268) 00:15:45.585 fused_ordering(269) 00:15:45.585 fused_ordering(270) 00:15:45.585 fused_ordering(271) 00:15:45.585 fused_ordering(272) 00:15:45.585 fused_ordering(273) 00:15:45.585 fused_ordering(274) 00:15:45.585 fused_ordering(275) 00:15:45.585 fused_ordering(276) 00:15:45.585 fused_ordering(277) 00:15:45.585 fused_ordering(278) 00:15:45.585 fused_ordering(279) 00:15:45.585 fused_ordering(280) 00:15:45.585 fused_ordering(281) 00:15:45.585 fused_ordering(282) 00:15:45.585 fused_ordering(283) 00:15:45.585 fused_ordering(284) 00:15:45.585 fused_ordering(285) 00:15:45.585 fused_ordering(286) 00:15:45.585 fused_ordering(287) 00:15:45.585 fused_ordering(288) 00:15:45.585 fused_ordering(289) 00:15:45.585 fused_ordering(290) 00:15:45.585 fused_ordering(291) 00:15:45.585 fused_ordering(292) 00:15:45.585 fused_ordering(293) 00:15:45.585 fused_ordering(294) 00:15:45.585 fused_ordering(295) 00:15:45.585 fused_ordering(296) 00:15:45.585 fused_ordering(297) 00:15:45.585 fused_ordering(298) 00:15:45.585 fused_ordering(299) 00:15:45.585 fused_ordering(300) 00:15:45.585 fused_ordering(301) 00:15:45.585 fused_ordering(302) 00:15:45.585 fused_ordering(303) 00:15:45.585 fused_ordering(304) 00:15:45.585 fused_ordering(305) 00:15:45.585 fused_ordering(306) 00:15:45.585 fused_ordering(307) 00:15:45.585 fused_ordering(308) 00:15:45.585 fused_ordering(309) 00:15:45.585 fused_ordering(310) 00:15:45.585 fused_ordering(311) 00:15:45.585 
fused_ordering(312) 00:15:45.585 fused_ordering(313) 00:15:45.585 fused_ordering(314) 00:15:45.585 fused_ordering(315) 00:15:45.585 fused_ordering(316) 00:15:45.585 fused_ordering(317) 00:15:45.585 fused_ordering(318) 00:15:45.585 fused_ordering(319) 00:15:45.585 fused_ordering(320) 00:15:45.585 fused_ordering(321) 00:15:45.585 fused_ordering(322) 00:15:45.585 fused_ordering(323) 00:15:45.585 fused_ordering(324) 00:15:45.585 fused_ordering(325) 00:15:45.585 fused_ordering(326) 00:15:45.585 fused_ordering(327) 00:15:45.585 fused_ordering(328) 00:15:45.585 fused_ordering(329) 00:15:45.585 fused_ordering(330) 00:15:45.585 fused_ordering(331) 00:15:45.585 fused_ordering(332) 00:15:45.585 fused_ordering(333) 00:15:45.585 fused_ordering(334) 00:15:45.585 fused_ordering(335) 00:15:45.585 fused_ordering(336) 00:15:45.585 fused_ordering(337) 00:15:45.585 fused_ordering(338) 00:15:45.585 fused_ordering(339) 00:15:45.585 fused_ordering(340) 00:15:45.585 fused_ordering(341) 00:15:45.585 fused_ordering(342) 00:15:45.585 fused_ordering(343) 00:15:45.585 fused_ordering(344) 00:15:45.585 fused_ordering(345) 00:15:45.585 fused_ordering(346) 00:15:45.585 fused_ordering(347) 00:15:45.585 fused_ordering(348) 00:15:45.585 fused_ordering(349) 00:15:45.585 fused_ordering(350) 00:15:45.585 fused_ordering(351) 00:15:45.585 fused_ordering(352) 00:15:45.585 fused_ordering(353) 00:15:45.585 fused_ordering(354) 00:15:45.585 fused_ordering(355) 00:15:45.585 fused_ordering(356) 00:15:45.585 fused_ordering(357) 00:15:45.585 fused_ordering(358) 00:15:45.585 fused_ordering(359) 00:15:45.585 fused_ordering(360) 00:15:45.585 fused_ordering(361) 00:15:45.585 fused_ordering(362) 00:15:45.585 fused_ordering(363) 00:15:45.585 fused_ordering(364) 00:15:45.585 fused_ordering(365) 00:15:45.585 fused_ordering(366) 00:15:45.585 fused_ordering(367) 00:15:45.585 fused_ordering(368) 00:15:45.585 fused_ordering(369) 00:15:45.585 fused_ordering(370) 00:15:45.585 fused_ordering(371) 00:15:45.585 fused_ordering(372) 00:15:45.585 fused_ordering(373) 00:15:45.585 fused_ordering(374) 00:15:45.585 fused_ordering(375) 00:15:45.585 fused_ordering(376) 00:15:45.585 fused_ordering(377) 00:15:45.585 fused_ordering(378) 00:15:45.585 fused_ordering(379) 00:15:45.585 fused_ordering(380) 00:15:45.585 fused_ordering(381) 00:15:45.585 fused_ordering(382) 00:15:45.585 fused_ordering(383) 00:15:45.585 fused_ordering(384) 00:15:45.585 fused_ordering(385) 00:15:45.585 fused_ordering(386) 00:15:45.585 fused_ordering(387) 00:15:45.585 fused_ordering(388) 00:15:45.585 fused_ordering(389) 00:15:45.585 fused_ordering(390) 00:15:45.585 fused_ordering(391) 00:15:45.585 fused_ordering(392) 00:15:45.585 fused_ordering(393) 00:15:45.585 fused_ordering(394) 00:15:45.585 fused_ordering(395) 00:15:45.585 fused_ordering(396) 00:15:45.585 fused_ordering(397) 00:15:45.585 fused_ordering(398) 00:15:45.585 fused_ordering(399) 00:15:45.585 fused_ordering(400) 00:15:45.585 fused_ordering(401) 00:15:45.585 fused_ordering(402) 00:15:45.585 fused_ordering(403) 00:15:45.585 fused_ordering(404) 00:15:45.585 fused_ordering(405) 00:15:45.585 fused_ordering(406) 00:15:45.585 fused_ordering(407) 00:15:45.585 fused_ordering(408) 00:15:45.585 fused_ordering(409) 00:15:45.585 fused_ordering(410) 00:15:45.844 fused_ordering(411) 00:15:45.844 fused_ordering(412) 00:15:45.844 fused_ordering(413) 00:15:45.844 fused_ordering(414) 00:15:45.844 fused_ordering(415) 00:15:45.844 fused_ordering(416) 00:15:45.844 fused_ordering(417) 00:15:45.844 fused_ordering(418) 00:15:45.844 fused_ordering(419) 
00:15:45.844 fused_ordering(420) 00:15:45.844 fused_ordering(421) 00:15:45.844 fused_ordering(422) 00:15:45.844 fused_ordering(423) 00:15:45.844 fused_ordering(424) 00:15:45.844 fused_ordering(425) 00:15:45.844 fused_ordering(426) 00:15:45.844 fused_ordering(427) 00:15:45.844 fused_ordering(428) 00:15:45.844 fused_ordering(429) 00:15:45.844 fused_ordering(430) 00:15:45.844 fused_ordering(431) 00:15:45.844 fused_ordering(432) 00:15:45.844 fused_ordering(433) 00:15:45.844 fused_ordering(434) 00:15:45.844 fused_ordering(435) 00:15:45.844 fused_ordering(436) 00:15:45.844 fused_ordering(437) 00:15:45.844 fused_ordering(438) 00:15:45.844 fused_ordering(439) 00:15:45.844 fused_ordering(440) 00:15:45.844 fused_ordering(441) 00:15:45.844 fused_ordering(442) 00:15:45.844 fused_ordering(443) 00:15:45.844 fused_ordering(444) 00:15:45.844 fused_ordering(445) 00:15:45.844 fused_ordering(446) 00:15:45.844 fused_ordering(447) 00:15:45.844 fused_ordering(448) 00:15:45.844 fused_ordering(449) 00:15:45.844 fused_ordering(450) 00:15:45.844 fused_ordering(451) 00:15:45.844 fused_ordering(452) 00:15:45.844 fused_ordering(453) 00:15:45.844 fused_ordering(454) 00:15:45.844 fused_ordering(455) 00:15:45.844 fused_ordering(456) 00:15:45.844 fused_ordering(457) 00:15:45.844 fused_ordering(458) 00:15:45.844 fused_ordering(459) 00:15:45.844 fused_ordering(460) 00:15:45.844 fused_ordering(461) 00:15:45.844 fused_ordering(462) 00:15:45.844 fused_ordering(463) 00:15:45.844 fused_ordering(464) 00:15:45.844 fused_ordering(465) 00:15:45.844 fused_ordering(466) 00:15:45.844 fused_ordering(467) 00:15:45.844 fused_ordering(468) 00:15:45.844 fused_ordering(469) 00:15:45.844 fused_ordering(470) 00:15:45.844 fused_ordering(471) 00:15:45.844 fused_ordering(472) 00:15:45.844 fused_ordering(473) 00:15:45.844 fused_ordering(474) 00:15:45.844 fused_ordering(475) 00:15:45.844 fused_ordering(476) 00:15:45.844 fused_ordering(477) 00:15:45.844 fused_ordering(478) 00:15:45.844 fused_ordering(479) 00:15:45.844 fused_ordering(480) 00:15:45.844 fused_ordering(481) 00:15:45.844 fused_ordering(482) 00:15:45.844 fused_ordering(483) 00:15:45.844 fused_ordering(484) 00:15:45.844 fused_ordering(485) 00:15:45.844 fused_ordering(486) 00:15:45.844 fused_ordering(487) 00:15:45.844 fused_ordering(488) 00:15:45.844 fused_ordering(489) 00:15:45.844 fused_ordering(490) 00:15:45.844 fused_ordering(491) 00:15:45.844 fused_ordering(492) 00:15:45.844 fused_ordering(493) 00:15:45.844 fused_ordering(494) 00:15:45.844 fused_ordering(495) 00:15:45.844 fused_ordering(496) 00:15:45.844 fused_ordering(497) 00:15:45.844 fused_ordering(498) 00:15:45.844 fused_ordering(499) 00:15:45.844 fused_ordering(500) 00:15:45.844 fused_ordering(501) 00:15:45.844 fused_ordering(502) 00:15:45.844 fused_ordering(503) 00:15:45.844 fused_ordering(504) 00:15:45.844 fused_ordering(505) 00:15:45.844 fused_ordering(506) 00:15:45.844 fused_ordering(507) 00:15:45.844 fused_ordering(508) 00:15:45.844 fused_ordering(509) 00:15:45.844 fused_ordering(510) 00:15:45.844 fused_ordering(511) 00:15:45.844 fused_ordering(512) 00:15:45.844 fused_ordering(513) 00:15:45.844 fused_ordering(514) 00:15:45.844 fused_ordering(515) 00:15:45.844 fused_ordering(516) 00:15:45.844 fused_ordering(517) 00:15:45.844 fused_ordering(518) 00:15:45.844 fused_ordering(519) 00:15:45.844 fused_ordering(520) 00:15:45.844 fused_ordering(521) 00:15:45.844 fused_ordering(522) 00:15:45.844 fused_ordering(523) 00:15:45.844 fused_ordering(524) 00:15:45.844 fused_ordering(525) 00:15:45.844 fused_ordering(526) 00:15:45.844 
fused_ordering(527) 00:15:45.844 fused_ordering(528) 00:15:45.844 fused_ordering(529) 00:15:45.844 fused_ordering(530) 00:15:45.844 fused_ordering(531) 00:15:45.844 fused_ordering(532) 00:15:45.844 fused_ordering(533) 00:15:45.844 fused_ordering(534) 00:15:45.844 fused_ordering(535) 00:15:45.844 fused_ordering(536) 00:15:45.844 fused_ordering(537) 00:15:45.844 fused_ordering(538) 00:15:45.844 fused_ordering(539) 00:15:45.844 fused_ordering(540) 00:15:45.844 fused_ordering(541) 00:15:45.844 fused_ordering(542) 00:15:45.844 fused_ordering(543) 00:15:45.844 fused_ordering(544) 00:15:45.844 fused_ordering(545) 00:15:45.844 fused_ordering(546) 00:15:45.844 fused_ordering(547) 00:15:45.844 fused_ordering(548) 00:15:45.844 fused_ordering(549) 00:15:45.844 fused_ordering(550) 00:15:45.844 fused_ordering(551) 00:15:45.844 fused_ordering(552) 00:15:45.844 fused_ordering(553) 00:15:45.844 fused_ordering(554) 00:15:45.844 fused_ordering(555) 00:15:45.844 fused_ordering(556) 00:15:45.844 fused_ordering(557) 00:15:45.844 fused_ordering(558) 00:15:45.844 fused_ordering(559) 00:15:45.844 fused_ordering(560) 00:15:45.844 fused_ordering(561) 00:15:45.844 fused_ordering(562) 00:15:45.844 fused_ordering(563) 00:15:45.844 fused_ordering(564) 00:15:45.844 fused_ordering(565) 00:15:45.844 fused_ordering(566) 00:15:45.844 fused_ordering(567) 00:15:45.844 fused_ordering(568) 00:15:45.844 fused_ordering(569) 00:15:45.844 fused_ordering(570) 00:15:45.844 fused_ordering(571) 00:15:45.844 fused_ordering(572) 00:15:45.844 fused_ordering(573) 00:15:45.844 fused_ordering(574) 00:15:45.844 fused_ordering(575) 00:15:45.844 fused_ordering(576) 00:15:45.844 fused_ordering(577) 00:15:45.844 fused_ordering(578) 00:15:45.844 fused_ordering(579) 00:15:45.844 fused_ordering(580) 00:15:45.844 fused_ordering(581) 00:15:45.844 fused_ordering(582) 00:15:45.844 fused_ordering(583) 00:15:45.844 fused_ordering(584) 00:15:45.844 fused_ordering(585) 00:15:45.844 fused_ordering(586) 00:15:45.844 fused_ordering(587) 00:15:45.844 fused_ordering(588) 00:15:45.844 fused_ordering(589) 00:15:45.844 fused_ordering(590) 00:15:45.844 fused_ordering(591) 00:15:45.844 fused_ordering(592) 00:15:45.844 fused_ordering(593) 00:15:45.844 fused_ordering(594) 00:15:45.844 fused_ordering(595) 00:15:45.844 fused_ordering(596) 00:15:45.844 fused_ordering(597) 00:15:45.844 fused_ordering(598) 00:15:45.844 fused_ordering(599) 00:15:45.844 fused_ordering(600) 00:15:45.844 fused_ordering(601) 00:15:45.844 fused_ordering(602) 00:15:45.844 fused_ordering(603) 00:15:45.844 fused_ordering(604) 00:15:45.844 fused_ordering(605) 00:15:45.844 fused_ordering(606) 00:15:45.844 fused_ordering(607) 00:15:45.844 fused_ordering(608) 00:15:45.844 fused_ordering(609) 00:15:45.844 fused_ordering(610) 00:15:45.844 fused_ordering(611) 00:15:45.844 fused_ordering(612) 00:15:45.844 fused_ordering(613) 00:15:45.844 fused_ordering(614) 00:15:45.844 fused_ordering(615) 00:15:46.411 fused_ordering(616) 00:15:46.411 fused_ordering(617) 00:15:46.411 fused_ordering(618) 00:15:46.411 fused_ordering(619) 00:15:46.411 fused_ordering(620) 00:15:46.411 fused_ordering(621) 00:15:46.411 fused_ordering(622) 00:15:46.411 fused_ordering(623) 00:15:46.411 fused_ordering(624) 00:15:46.411 fused_ordering(625) 00:15:46.411 fused_ordering(626) 00:15:46.411 fused_ordering(627) 00:15:46.411 fused_ordering(628) 00:15:46.411 fused_ordering(629) 00:15:46.411 fused_ordering(630) 00:15:46.411 fused_ordering(631) 00:15:46.411 fused_ordering(632) 00:15:46.411 fused_ordering(633) 00:15:46.412 fused_ordering(634) 
00:15:46.412 fused_ordering(635) 00:15:46.412 fused_ordering(636) 00:15:46.412 fused_ordering(637) 00:15:46.412 fused_ordering(638) 00:15:46.412 fused_ordering(639) 00:15:46.412 fused_ordering(640) 00:15:46.412 fused_ordering(641) 00:15:46.412 fused_ordering(642) 00:15:46.412 fused_ordering(643) 00:15:46.412 fused_ordering(644) 00:15:46.412 fused_ordering(645) 00:15:46.412 fused_ordering(646) 00:15:46.412 fused_ordering(647) 00:15:46.412 fused_ordering(648) 00:15:46.412 fused_ordering(649) 00:15:46.412 fused_ordering(650) 00:15:46.412 fused_ordering(651) 00:15:46.412 fused_ordering(652) 00:15:46.412 fused_ordering(653) 00:15:46.412 fused_ordering(654) 00:15:46.412 fused_ordering(655) 00:15:46.412 fused_ordering(656) 00:15:46.412 fused_ordering(657) 00:15:46.412 fused_ordering(658) 00:15:46.412 fused_ordering(659) 00:15:46.412 fused_ordering(660) 00:15:46.412 fused_ordering(661) 00:15:46.412 fused_ordering(662) 00:15:46.412 fused_ordering(663) 00:15:46.412 fused_ordering(664) 00:15:46.412 fused_ordering(665) 00:15:46.412 fused_ordering(666) 00:15:46.412 fused_ordering(667) 00:15:46.412 fused_ordering(668) 00:15:46.412 fused_ordering(669) 00:15:46.412 fused_ordering(670) 00:15:46.412 fused_ordering(671) 00:15:46.412 fused_ordering(672) 00:15:46.412 fused_ordering(673) 00:15:46.412 fused_ordering(674) 00:15:46.412 fused_ordering(675) 00:15:46.412 fused_ordering(676) 00:15:46.412 fused_ordering(677) 00:15:46.412 fused_ordering(678) 00:15:46.412 fused_ordering(679) 00:15:46.412 fused_ordering(680) 00:15:46.412 fused_ordering(681) 00:15:46.412 fused_ordering(682) 00:15:46.412 fused_ordering(683) 00:15:46.412 fused_ordering(684) 00:15:46.412 fused_ordering(685) 00:15:46.412 fused_ordering(686) 00:15:46.412 fused_ordering(687) 00:15:46.412 fused_ordering(688) 00:15:46.412 fused_ordering(689) 00:15:46.412 fused_ordering(690) 00:15:46.412 fused_ordering(691) 00:15:46.412 fused_ordering(692) 00:15:46.412 fused_ordering(693) 00:15:46.412 fused_ordering(694) 00:15:46.412 fused_ordering(695) 00:15:46.412 fused_ordering(696) 00:15:46.412 fused_ordering(697) 00:15:46.412 fused_ordering(698) 00:15:46.412 fused_ordering(699) 00:15:46.412 fused_ordering(700) 00:15:46.412 fused_ordering(701) 00:15:46.412 fused_ordering(702) 00:15:46.412 fused_ordering(703) 00:15:46.412 fused_ordering(704) 00:15:46.412 fused_ordering(705) 00:15:46.412 fused_ordering(706) 00:15:46.412 fused_ordering(707) 00:15:46.412 fused_ordering(708) 00:15:46.412 fused_ordering(709) 00:15:46.412 fused_ordering(710) 00:15:46.412 fused_ordering(711) 00:15:46.412 fused_ordering(712) 00:15:46.412 fused_ordering(713) 00:15:46.412 fused_ordering(714) 00:15:46.412 fused_ordering(715) 00:15:46.412 fused_ordering(716) 00:15:46.412 fused_ordering(717) 00:15:46.412 fused_ordering(718) 00:15:46.412 fused_ordering(719) 00:15:46.412 fused_ordering(720) 00:15:46.412 fused_ordering(721) 00:15:46.412 fused_ordering(722) 00:15:46.412 fused_ordering(723) 00:15:46.412 fused_ordering(724) 00:15:46.412 fused_ordering(725) 00:15:46.412 fused_ordering(726) 00:15:46.412 fused_ordering(727) 00:15:46.412 fused_ordering(728) 00:15:46.412 fused_ordering(729) 00:15:46.412 fused_ordering(730) 00:15:46.412 fused_ordering(731) 00:15:46.412 fused_ordering(732) 00:15:46.412 fused_ordering(733) 00:15:46.412 fused_ordering(734) 00:15:46.412 fused_ordering(735) 00:15:46.412 fused_ordering(736) 00:15:46.412 fused_ordering(737) 00:15:46.412 fused_ordering(738) 00:15:46.412 fused_ordering(739) 00:15:46.412 fused_ordering(740) 00:15:46.412 fused_ordering(741) 00:15:46.412 
fused_ordering(742) 00:15:46.412 fused_ordering(743) 00:15:46.412 fused_ordering(744) 00:15:46.412 fused_ordering(745) 00:15:46.412 fused_ordering(746) 00:15:46.412 fused_ordering(747) 00:15:46.412 fused_ordering(748) 00:15:46.412 fused_ordering(749) 00:15:46.412 fused_ordering(750) 00:15:46.412 fused_ordering(751) 00:15:46.412 fused_ordering(752) 00:15:46.412 fused_ordering(753) 00:15:46.412 fused_ordering(754) 00:15:46.412 fused_ordering(755) 00:15:46.412 fused_ordering(756) 00:15:46.412 fused_ordering(757) 00:15:46.412 fused_ordering(758) 00:15:46.412 fused_ordering(759) 00:15:46.412 fused_ordering(760) 00:15:46.412 fused_ordering(761) 00:15:46.412 fused_ordering(762) 00:15:46.412 fused_ordering(763) 00:15:46.412 fused_ordering(764) 00:15:46.412 fused_ordering(765) 00:15:46.412 fused_ordering(766) 00:15:46.412 fused_ordering(767) 00:15:46.412 fused_ordering(768) 00:15:46.412 fused_ordering(769) 00:15:46.412 fused_ordering(770) 00:15:46.412 fused_ordering(771) 00:15:46.412 fused_ordering(772) 00:15:46.412 fused_ordering(773) 00:15:46.412 fused_ordering(774) 00:15:46.412 fused_ordering(775) 00:15:46.412 fused_ordering(776) 00:15:46.412 fused_ordering(777) 00:15:46.412 fused_ordering(778) 00:15:46.412 fused_ordering(779) 00:15:46.412 fused_ordering(780) 00:15:46.412 fused_ordering(781) 00:15:46.412 fused_ordering(782) 00:15:46.412 fused_ordering(783) 00:15:46.412 fused_ordering(784) 00:15:46.412 fused_ordering(785) 00:15:46.412 fused_ordering(786) 00:15:46.412 fused_ordering(787) 00:15:46.412 fused_ordering(788) 00:15:46.412 fused_ordering(789) 00:15:46.412 fused_ordering(790) 00:15:46.412 fused_ordering(791) 00:15:46.412 fused_ordering(792) 00:15:46.412 fused_ordering(793) 00:15:46.412 fused_ordering(794) 00:15:46.412 fused_ordering(795) 00:15:46.412 fused_ordering(796) 00:15:46.412 fused_ordering(797) 00:15:46.412 fused_ordering(798) 00:15:46.412 fused_ordering(799) 00:15:46.412 fused_ordering(800) 00:15:46.412 fused_ordering(801) 00:15:46.412 fused_ordering(802) 00:15:46.412 fused_ordering(803) 00:15:46.412 fused_ordering(804) 00:15:46.412 fused_ordering(805) 00:15:46.412 fused_ordering(806) 00:15:46.412 fused_ordering(807) 00:15:46.412 fused_ordering(808) 00:15:46.412 fused_ordering(809) 00:15:46.412 fused_ordering(810) 00:15:46.412 fused_ordering(811) 00:15:46.412 fused_ordering(812) 00:15:46.412 fused_ordering(813) 00:15:46.412 fused_ordering(814) 00:15:46.412 fused_ordering(815) 00:15:46.412 fused_ordering(816) 00:15:46.412 fused_ordering(817) 00:15:46.412 fused_ordering(818) 00:15:46.412 fused_ordering(819) 00:15:46.412 fused_ordering(820) 00:15:46.980 [2024-07-11 13:45:49.267319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2352940 is same with the state(5) to be set 00:15:46.980 fused_ordering(821) 00:15:46.980 fused_ordering(822) 00:15:46.980 fused_ordering(823) 00:15:46.980 fused_ordering(824) 00:15:46.980 fused_ordering(825) 00:15:46.980 fused_ordering(826) 00:15:46.980 fused_ordering(827) 00:15:46.980 fused_ordering(828) 00:15:46.980 fused_ordering(829) 00:15:46.980 fused_ordering(830) 00:15:46.980 fused_ordering(831) 00:15:46.980 fused_ordering(832) 00:15:46.980 fused_ordering(833) 00:15:46.980 fused_ordering(834) 00:15:46.980 fused_ordering(835) 00:15:46.980 fused_ordering(836) 00:15:46.980 fused_ordering(837) 00:15:46.980 fused_ordering(838) 00:15:46.980 fused_ordering(839) 00:15:46.980 fused_ordering(840) 00:15:46.980 fused_ordering(841) 00:15:46.980 fused_ordering(842) 00:15:46.980 fused_ordering(843) 00:15:46.980 fused_ordering(844)
00:15:46.980 fused_ordering(845) 00:15:46.980 fused_ordering(846) 00:15:46.980 fused_ordering(847) 00:15:46.980 fused_ordering(848) 00:15:46.980 fused_ordering(849) 00:15:46.980 fused_ordering(850) 00:15:46.980 fused_ordering(851) 00:15:46.980 fused_ordering(852) 00:15:46.980 fused_ordering(853) 00:15:46.980 fused_ordering(854) 00:15:46.980 fused_ordering(855) 00:15:46.980 fused_ordering(856) 00:15:46.980 fused_ordering(857) 00:15:46.980 fused_ordering(858) 00:15:46.980 fused_ordering(859) 00:15:46.980 fused_ordering(860) 00:15:46.980 fused_ordering(861) 00:15:46.980 fused_ordering(862) 00:15:46.980 fused_ordering(863) 00:15:46.980 fused_ordering(864) 00:15:46.980 fused_ordering(865) 00:15:46.980 fused_ordering(866) 00:15:46.980 fused_ordering(867) 00:15:46.980 fused_ordering(868) 00:15:46.980 fused_ordering(869) 00:15:46.980 fused_ordering(870) 00:15:46.980 fused_ordering(871) 00:15:46.980 fused_ordering(872) 00:15:46.980 fused_ordering(873) 00:15:46.980 fused_ordering(874) 00:15:46.980 fused_ordering(875) 00:15:46.980 fused_ordering(876) 00:15:46.980 fused_ordering(877) 00:15:46.980 fused_ordering(878) 00:15:46.980 fused_ordering(879) 00:15:46.980 fused_ordering(880) 00:15:46.980 fused_ordering(881) 00:15:46.980 fused_ordering(882) 00:15:46.980 fused_ordering(883) 00:15:46.980 fused_ordering(884) 00:15:46.980 fused_ordering(885) 00:15:46.980 fused_ordering(886) 00:15:46.980 fused_ordering(887) 00:15:46.980 fused_ordering(888) 00:15:46.980 fused_ordering(889) 00:15:46.980 fused_ordering(890) 00:15:46.980 fused_ordering(891) 00:15:46.980 fused_ordering(892) 00:15:46.980 fused_ordering(893) 00:15:46.980 fused_ordering(894) 00:15:46.980 fused_ordering(895) 00:15:46.980 fused_ordering(896) 00:15:46.980 fused_ordering(897) 00:15:46.980 fused_ordering(898) 00:15:46.980 fused_ordering(899) 00:15:46.980 fused_ordering(900) 00:15:46.980 fused_ordering(901) 00:15:46.980 fused_ordering(902) 00:15:46.980 fused_ordering(903) 00:15:46.980 fused_ordering(904) 00:15:46.980 fused_ordering(905) 00:15:46.980 fused_ordering(906) 00:15:46.980 fused_ordering(907) 00:15:46.980 fused_ordering(908) 00:15:46.980 fused_ordering(909) 00:15:46.980 fused_ordering(910) 00:15:46.980 fused_ordering(911) 00:15:46.980 fused_ordering(912) 00:15:46.980 fused_ordering(913) 00:15:46.980 fused_ordering(914) 00:15:46.980 fused_ordering(915) 00:15:46.980 fused_ordering(916) 00:15:46.980 fused_ordering(917) 00:15:46.980 fused_ordering(918) 00:15:46.980 fused_ordering(919) 00:15:46.980 fused_ordering(920) 00:15:46.980 fused_ordering(921) 00:15:46.980 fused_ordering(922) 00:15:46.980 fused_ordering(923) 00:15:46.980 fused_ordering(924) 00:15:46.980 fused_ordering(925) 00:15:46.980 fused_ordering(926) 00:15:46.980 fused_ordering(927) 00:15:46.980 fused_ordering(928) 00:15:46.980 fused_ordering(929) 00:15:46.980 fused_ordering(930) 00:15:46.980 fused_ordering(931) 00:15:46.980 fused_ordering(932) 00:15:46.980 fused_ordering(933) 00:15:46.980 fused_ordering(934) 00:15:46.980 fused_ordering(935) 00:15:46.980 fused_ordering(936) 00:15:46.980 fused_ordering(937) 00:15:46.980 fused_ordering(938) 00:15:46.980 fused_ordering(939) 00:15:46.980 fused_ordering(940) 00:15:46.980 fused_ordering(941) 00:15:46.980 fused_ordering(942) 00:15:46.980 fused_ordering(943) 00:15:46.980 fused_ordering(944) 00:15:46.980 fused_ordering(945) 00:15:46.980 fused_ordering(946) 00:15:46.980 fused_ordering(947) 00:15:46.980 fused_ordering(948) 00:15:46.980 fused_ordering(949) 00:15:46.980 fused_ordering(950) 00:15:46.980 fused_ordering(951) 00:15:46.980 
fused_ordering(952) 00:15:46.980 fused_ordering(953) 00:15:46.980 fused_ordering(954) 00:15:46.980 fused_ordering(955) 00:15:46.980 fused_ordering(956) 00:15:46.980 fused_ordering(957) 00:15:46.980 fused_ordering(958) 00:15:46.980 fused_ordering(959) 00:15:46.980 fused_ordering(960) 00:15:46.980 fused_ordering(961) 00:15:46.980 fused_ordering(962) 00:15:46.980 fused_ordering(963) 00:15:46.980 fused_ordering(964) 00:15:46.980 fused_ordering(965) 00:15:46.980 fused_ordering(966) 00:15:46.980 fused_ordering(967) 00:15:46.980 fused_ordering(968) 00:15:46.980 fused_ordering(969) 00:15:46.980 fused_ordering(970) 00:15:46.980 fused_ordering(971) 00:15:46.980 fused_ordering(972) 00:15:46.980 fused_ordering(973) 00:15:46.980 fused_ordering(974) 00:15:46.980 fused_ordering(975) 00:15:46.980 fused_ordering(976) 00:15:46.980 fused_ordering(977) 00:15:46.980 fused_ordering(978) 00:15:46.980 fused_ordering(979) 00:15:46.980 fused_ordering(980) 00:15:46.980 fused_ordering(981) 00:15:46.980 fused_ordering(982) 00:15:46.980 fused_ordering(983) 00:15:46.980 fused_ordering(984) 00:15:46.980 fused_ordering(985) 00:15:46.981 fused_ordering(986) 00:15:46.981 fused_ordering(987) 00:15:46.981 fused_ordering(988) 00:15:46.981 fused_ordering(989) 00:15:46.981 fused_ordering(990) 00:15:46.981 fused_ordering(991) 00:15:46.981 fused_ordering(992) 00:15:46.981 fused_ordering(993) 00:15:46.981 fused_ordering(994) 00:15:46.981 fused_ordering(995) 00:15:46.981 fused_ordering(996) 00:15:46.981 fused_ordering(997) 00:15:46.981 fused_ordering(998) 00:15:46.981 fused_ordering(999) 00:15:46.981 fused_ordering(1000) 00:15:46.981 fused_ordering(1001) 00:15:46.981 fused_ordering(1002) 00:15:46.981 fused_ordering(1003) 00:15:46.981 fused_ordering(1004) 00:15:46.981 fused_ordering(1005) 00:15:46.981 fused_ordering(1006) 00:15:46.981 fused_ordering(1007) 00:15:46.981 fused_ordering(1008) 00:15:46.981 fused_ordering(1009) 00:15:46.981 fused_ordering(1010) 00:15:46.981 fused_ordering(1011) 00:15:46.981 fused_ordering(1012) 00:15:46.981 fused_ordering(1013) 00:15:46.981 fused_ordering(1014) 00:15:46.981 fused_ordering(1015) 00:15:46.981 fused_ordering(1016) 00:15:46.981 fused_ordering(1017) 00:15:46.981 fused_ordering(1018) 00:15:46.981 fused_ordering(1019) 00:15:46.981 fused_ordering(1020) 00:15:46.981 fused_ordering(1021) 00:15:46.981 fused_ordering(1022) 00:15:46.981 fused_ordering(1023) 00:15:46.981 13:45:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:46.981 13:45:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:46.981 13:45:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:46.981 13:45:49 -- nvmf/common.sh@116 -- # sync 00:15:46.981 13:45:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:46.981 13:45:49 -- nvmf/common.sh@119 -- # set +e 00:15:46.981 13:45:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:46.981 13:45:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:46.981 rmmod nvme_tcp 00:15:46.981 rmmod nvme_fabrics 00:15:46.981 rmmod nvme_keyring 00:15:46.981 13:45:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:46.981 13:45:49 -- nvmf/common.sh@123 -- # set -e 00:15:46.981 13:45:49 -- nvmf/common.sh@124 -- # return 0 00:15:46.981 13:45:49 -- nvmf/common.sh@477 -- # '[' -n 1545800 ']' 00:15:46.981 13:45:49 -- nvmf/common.sh@478 -- # killprocess 1545800 00:15:46.981 13:45:49 -- common/autotest_common.sh@926 -- # '[' -z 1545800 ']' 00:15:46.981 13:45:49 -- common/autotest_common.sh@930 -- # kill -0 1545800 00:15:46.981 13:45:49 -- common/autotest_common.sh@931 -- 
# uname 00:15:46.981 13:45:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:46.981 13:45:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1545800 00:15:46.981 13:45:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:46.981 13:45:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:46.981 13:45:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1545800' 00:15:46.981 killing process with pid 1545800 00:15:46.981 13:45:49 -- common/autotest_common.sh@945 -- # kill 1545800 00:15:46.981 13:45:49 -- common/autotest_common.sh@950 -- # wait 1545800 00:15:47.240 13:45:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:47.240 13:45:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:47.240 13:45:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:47.240 13:45:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.240 13:45:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:47.240 13:45:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.240 13:45:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.240 13:45:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.211 13:45:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:49.211 00:15:49.211 real 0m10.751s 00:15:49.211 user 0m5.470s 00:15:49.211 sys 0m5.638s 00:15:49.211 13:45:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.211 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.211 ************************************ 00:15:49.211 END TEST nvmf_fused_ordering 00:15:49.211 ************************************ 00:15:49.211 13:45:51 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:49.211 13:45:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:49.211 13:45:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.211 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:15:49.211 ************************************ 00:15:49.211 START TEST nvmf_delete_subsystem 00:15:49.211 ************************************ 00:15:49.211 13:45:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:49.470 * Looking for test storage... 
00:15:49.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.470 13:45:51 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.470 13:45:51 -- nvmf/common.sh@7 -- # uname -s 00:15:49.470 13:45:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.470 13:45:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.470 13:45:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.470 13:45:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.470 13:45:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.470 13:45:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.470 13:45:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.470 13:45:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.470 13:45:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.470 13:45:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.470 13:45:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.470 13:45:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.470 13:45:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.470 13:45:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.470 13:45:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.470 13:45:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.470 13:45:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.470 13:45:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.470 13:45:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.470 13:45:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.470 13:45:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.470 13:45:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.470 13:45:51 -- paths/export.sh@5 -- # export PATH 00:15:49.470 13:45:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.470 13:45:51 -- nvmf/common.sh@46 -- # : 0 00:15:49.470 13:45:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:49.470 13:45:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:49.470 13:45:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:49.470 13:45:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.470 13:45:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.470 13:45:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:49.470 13:45:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:49.470 13:45:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:49.470 13:45:51 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:49.470 13:45:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:49.470 13:45:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.470 13:45:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:49.470 13:45:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:49.470 13:45:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:49.470 13:45:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.470 13:45:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.470 13:45:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.470 13:45:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:49.470 13:45:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:49.470 13:45:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:49.470 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:15:54.733 13:45:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:54.733 13:45:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:54.733 13:45:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:54.733 13:45:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:54.733 13:45:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:54.733 13:45:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:54.733 13:45:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:54.733 13:45:56 -- nvmf/common.sh@294 -- # net_devs=() 00:15:54.733 13:45:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:54.733 13:45:56 -- nvmf/common.sh@295 -- # e810=() 00:15:54.733 13:45:56 -- nvmf/common.sh@295 -- # local -ga e810 00:15:54.733 13:45:56 -- nvmf/common.sh@296 -- # x722=() 
00:15:54.733 13:45:56 -- nvmf/common.sh@296 -- # local -ga x722 00:15:54.733 13:45:56 -- nvmf/common.sh@297 -- # mlx=() 00:15:54.733 13:45:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:54.733 13:45:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.733 13:45:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:54.733 13:45:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:54.733 13:45:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:54.733 13:45:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:54.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:54.733 13:45:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:54.733 13:45:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:54.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:54.733 13:45:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:54.733 13:45:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.733 13:45:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.733 13:45:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:54.733 Found net devices under 0000:86:00.0: cvl_0_0 00:15:54.733 13:45:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
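Aside: the gather_supported_nvmf_pci_devs trace above (continuing below for the second port) buckets NICs by PCI vendor:device ID and then resolves each match to its kernel netdev name through sysfs. Condensed into plain shell, the detection loop looks roughly like this; it is a sketch of the helper's core, and assumes pci_bus_cache (a "vendor:device" to PCI-address map) has been populated elsewhere:

    intel=0x8086
    # E810 (ice driver) device IDs seen in this run; x722 and mlx5 IDs are collected the same way
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel netdev names live under the device's net/ dir
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keeping e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

That is exactly the shape of the "Found net devices under 0000:86:00.0/1" records in the trace.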
00:15:54.733 13:45:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:54.733 13:45:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.733 13:45:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.733 13:45:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:54.733 Found net devices under 0000:86:00.1: cvl_0_1 00:15:54.733 13:45:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.733 13:45:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:54.733 13:45:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:54.733 13:45:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.733 13:45:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.733 13:45:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.733 13:45:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:54.733 13:45:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.733 13:45:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.733 13:45:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:54.733 13:45:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.733 13:45:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.733 13:45:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:54.733 13:45:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:54.733 13:45:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.733 13:45:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.733 13:45:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.733 13:45:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.733 13:45:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:54.733 13:45:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.733 13:45:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.733 13:45:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.733 13:45:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:54.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:15:54.733 00:15:54.733 --- 10.0.0.2 ping statistics --- 00:15:54.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.733 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:15:54.733 13:45:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:15:54.733 00:15:54.733 --- 10.0.0.1 ping statistics --- 00:15:54.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.733 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:15:54.733 13:45:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.733 13:45:56 -- nvmf/common.sh@410 -- # return 0 00:15:54.733 13:45:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:54.733 13:45:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.733 13:45:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:54.733 13:45:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.733 13:45:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:54.733 13:45:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:54.733 13:45:56 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:54.733 13:45:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:54.733 13:45:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:54.733 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:54.733 13:45:56 -- nvmf/common.sh@469 -- # nvmfpid=1549602 00:15:54.733 13:45:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:54.734 13:45:56 -- nvmf/common.sh@470 -- # waitforlisten 1549602 00:15:54.734 13:45:56 -- common/autotest_common.sh@819 -- # '[' -z 1549602 ']' 00:15:54.734 13:45:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.734 13:45:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:54.734 13:45:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.734 13:45:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:54.734 13:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:54.734 [2024-07-11 13:45:57.034791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:54.734 [2024-07-11 13:45:57.034833] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.734 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.734 [2024-07-11 13:45:57.092997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.734 [2024-07-11 13:45:57.132085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.734 [2024-07-11 13:45:57.132225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.734 [2024-07-11 13:45:57.132235] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.734 [2024-07-11 13:45:57.132241] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
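Worth pulling out of the xtrace above: nvmf_tcp_init runs the whole test on one box by pinning the first E810 port (cvl_0_0, the target side) inside a private network namespace while the second port (cvl_0_1, the initiator side) stays in the root namespace, verifying both directions with ping, and then launching nvmf_tgt inside that namespace. The same plumbing as plain shell, using this run's interface names:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3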
00:15:54.734 [2024-07-11 13:45:57.132383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.734 [2024-07-11 13:45:57.132386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.670 13:45:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.670 13:45:57 -- common/autotest_common.sh@852 -- # return 0 00:15:55.670 13:45:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:55.670 13:45:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 13:45:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 [2024-07-11 13:45:57.863921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 [2024-07-11 13:45:57.880106] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 NULL1 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 Delay0 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.670 13:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.670 13:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:55.670 13:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@28 -- # perf_pid=1549852 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:55.670 13:45:57 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:55.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.670 [2024-07-11 13:45:57.964713] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:57.572 13:45:59 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.572 13:45:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.572 13:45:59 -- common/autotest_common.sh@10 -- # set +x 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Write completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 Read completed with error (sct=0, sc=8) 00:15:57.831 starting I/O failed: -6 00:15:57.831 Write 
completed with error (sct=0, sc=8) 00:15:57.831 [the storm continues: Write/Read 'completed with error (sct=0, sc=8)' completions interleaved with repeated 'starting I/O failed: -6' markers through 00:15:57.832]
00:15:57.832 [2024-07-11 13:46:00.216153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c0000c00 is same with the state(5) to be set
00:15:57.832 [further 'completed with error (sct=0, sc=8)' completions drain]
00:15:58.768 [2024-07-11 13:46:01.185695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2fda0 is same with the state(5) to be set
00:15:58.768 ['completed with error (sct=0, sc=8)' completions continue between each of the following state errors]
00:15:58.768 [2024-07-11 13:46:01.219545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c000c1d0 is same with the state(5) to be set
00:15:58.769 [2024-07-11 13:46:01.219987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2db60 is same with the state(5) to be set
00:15:58.769 [2024-07-11 13:46:01.220142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2b680 is same with the state(5) to be set
00:15:58.769 [2024-07-11 13:46:01.220281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2d600 is same with the state(5) to be set
00:15:58.769 [2024-07-11 13:46:01.220937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2fda0 (9): Bad file descriptor
00:15:58.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:58.769 13:46:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.769 13:46:01 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:58.769 13:46:01 -- target/delete_subsystem.sh@35 -- # kill -0 1549852 00:15:58.769 13:46:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:59.028 Initializing NVMe Controllers 00:15:59.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.028 Controller IO queue size 128, less than required. 00:15:59.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:59.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:59.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:59.028 Initialization complete. Launching workers.
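Interleaved with the perf output here is the script's wait loop: delete_subsystem.sh polls the perf process with kill -0 every half second and fails the test if it lingers (the @34 through @38 trace records above and below). As plain shell, roughly:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes liveness without sending a signal
        if (( delay++ > 30 )); then             # give up after ~15s of 0.5s polls
            echo "perf did not exit after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done

The latency table that follows is spdk_nvme_perf's final report for the aborted run.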
00:15:59.028 ========================================================
00:15:59.028                                                Latency(us)
00:15:59.028 Device Information                                                     :   IOPS   MiB/s    Average       min        max
00:15:59.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.70    0.09  950187.09    423.08 1013221.70
00:15:59.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.93    0.08  869786.65    246.21 1012767.48
00:15:59.028 ========================================================
00:15:59.028 Total                                                                  : 346.63    0.17  913788.04    246.21 1013221.70
00:15:59.028
00:15:59.287 13:46:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:59.287 13:46:01 -- target/delete_subsystem.sh@35 -- # kill -0 1549852 00:15:59.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1549852) - No such process 00:15:59.287 13:46:01 -- target/delete_subsystem.sh@45 -- # NOT wait 1549852 00:15:59.287 13:46:01 -- common/autotest_common.sh@640 -- # local es=0 00:15:59.287 13:46:01 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1549852 00:15:59.287 13:46:01 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:59.287 13:46:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:59.287 13:46:01 -- common/autotest_common.sh@632 -- # type -t wait 00:15:59.287 13:46:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:59.287 13:46:01 -- common/autotest_common.sh@643 -- # wait 1549852 00:15:59.287 13:46:01 -- common/autotest_common.sh@643 -- # es=1 00:15:59.287 13:46:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:59.287 13:46:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:59.287 13:46:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:59.287 13:46:01 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:59.287 13:46:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.287 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.545 13:46:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.545 13:46:01 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:59.545 13:46:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.545 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.546 [2024-07-11 13:46:01.747049] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.546 13:46:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.546 13:46:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:59.546 13:46:01 -- common/autotest_common.sh@10 -- # set +x 00:15:59.546 13:46:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@54 -- # perf_pid=1550553 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:15:59.546 13:46:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:59.546 EAL: No free 2048 kB hugepages
reported on node 1 00:15:59.546 [2024-07-11 13:46:01.806047] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:00.112 13:46:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:00.112 13:46:02 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:00.112 13:46:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:00.370 13:46:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:00.370 13:46:02 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:00.370 13:46:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:00.937 13:46:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:00.937 13:46:03 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:00.937 13:46:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:01.504 13:46:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:01.504 13:46:03 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:01.504 13:46:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:02.069 13:46:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:02.069 13:46:04 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:02.069 13:46:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:02.636 13:46:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:02.636 13:46:04 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:02.636 13:46:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:02.636 Initializing NVMe Controllers 00:16:02.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.636 Controller IO queue size 128, less than required. 00:16:02.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:02.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:02.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:02.636 Initialization complete. Launching workers. 
00:16:02.636 ========================================================
00:16:02.636                                                Latency(us)
00:16:02.636 Device Information                                                     :   IOPS   MiB/s    Average        min        max
00:16:02.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00    0.06 1003358.53 1000228.19 1010457.95
00:16:02.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00    0.06 1005218.79 1000300.16 1010557.44
00:16:02.636 ========================================================
00:16:02.636 Total                                                                  : 256.00    0.12 1004288.66 1000228.19 1010557.44
00:16:02.636
00:16:02.894 13:46:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:02.894 13:46:05 -- target/delete_subsystem.sh@57 -- # kill -0 1550553 00:16:02.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1550553) - No such process 00:16:02.894 13:46:05 -- target/delete_subsystem.sh@67 -- # wait 1550553 00:16:02.894 13:46:05 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:02.894 13:46:05 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:02.894 13:46:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.894 13:46:05 -- nvmf/common.sh@116 -- # sync 00:16:02.894 13:46:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.894 13:46:05 -- nvmf/common.sh@119 -- # set +e 00:16:02.894 13:46:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.894 13:46:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.894 rmmod nvme_tcp 00:16:02.894 rmmod nvme_fabrics 00:16:02.894 rmmod nvme_keyring 00:16:02.894 13:46:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:03.152 13:46:05 -- nvmf/common.sh@123 -- # set -e 00:16:03.152 13:46:05 -- nvmf/common.sh@124 -- # return 0 00:16:03.152 13:46:05 -- nvmf/common.sh@477 -- # '[' -n 1549602 ']' 00:16:03.152 13:46:05 -- nvmf/common.sh@478 -- # killprocess 1549602 00:16:03.152 13:46:05 -- common/autotest_common.sh@926 -- # '[' -z 1549602 ']' 00:16:03.152 13:46:05 -- common/autotest_common.sh@930 -- # kill -0 1549602 00:16:03.152 13:46:05 -- common/autotest_common.sh@931 -- # uname 00:16:03.152 13:46:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:03.152 13:46:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1549602 00:16:03.152 13:46:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:03.152 13:46:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:03.152 13:46:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1549602' 00:16:03.152 killing process with pid 1549602 00:16:03.152 13:46:05 -- common/autotest_common.sh@945 -- # kill 1549602 00:16:03.152 13:46:05 -- common/autotest_common.sh@950 -- # wait 1549602 00:16:03.152 13:46:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:03.152 13:46:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:03.152 13:46:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:03.152 13:46:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.152 13:46:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:03.152 13:46:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.152 13:46:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.152 13:46:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.679 13:46:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:05.679 00:16:05.679 real 0m15.977s 00:16:05.679 user 0m30.549s 00:16:05.679 sys 0m4.700s
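Condensing the whole delete_subsystem test into something runnable by hand: rpc_cmd in these traces is the suite's thin wrapper around SPDK's scripts/rpc.py (an assumption worth checking against the checked-out tree), so the build-run-delete cycle can be replayed roughly as below, given a target already listening per the namespace setup earlier and rpc.py/spdk_nvme_perf on PATH. The four 1000000 microsecond arguments to bdev_delay_create set average and p99 latency for reads and writes to a full second, which is what keeps the 128 queued commands in flight long enough for the delete to race them:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                        # 1000 MiB backing bdev, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive I/O at queue depth 128, then delete the subsystem out from under it
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1       # perf then reports 'completed with error (sct=0, sc=8)'
    while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done

The second pass in the log repeats this with -t 3 and deletes the subsystem while the delayed I/O is still queued, which is why the latency table above shows averages pinned just over the one-second artificial delay.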
00:16:05.679 13:46:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.679 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:16:05.679 ************************************ 00:16:05.679 END TEST nvmf_delete_subsystem 00:16:05.679 ************************************ 00:16:05.679 13:46:07 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:05.679 13:46:07 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:05.679 13:46:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:05.679 13:46:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:05.679 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:16:05.679 ************************************ 00:16:05.679 START TEST nvmf_nvme_cli 00:16:05.679 ************************************ 00:16:05.679 13:46:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:05.679 * Looking for test storage... 00:16:05.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.679 13:46:07 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.679 13:46:07 -- nvmf/common.sh@7 -- # uname -s 00:16:05.679 13:46:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.679 13:46:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.679 13:46:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.679 13:46:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.679 13:46:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.679 13:46:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.679 13:46:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.679 13:46:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.679 13:46:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.679 13:46:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.679 13:46:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.679 13:46:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.679 13:46:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.679 13:46:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.679 13:46:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.679 13:46:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.679 13:46:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.679 13:46:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.679 13:46:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.679 13:46:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.679 13:46:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.679 13:46:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.679 13:46:07 -- paths/export.sh@5 -- # export PATH 00:16:05.679 13:46:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.679 13:46:07 -- nvmf/common.sh@46 -- # : 0 00:16:05.679 13:46:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:05.679 13:46:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:05.679 13:46:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:05.679 13:46:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.679 13:46:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.679 13:46:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:05.679 13:46:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:05.679 13:46:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:05.679 13:46:07 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.679 13:46:07 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.679 13:46:07 -- target/nvme_cli.sh@14 -- # devs=() 00:16:05.679 13:46:07 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:05.679 13:46:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:05.679 13:46:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.679 13:46:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:05.679 13:46:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:05.679 13:46:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:05.679 13:46:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.679 13:46:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.679 13:46:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.679 13:46:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:05.679 13:46:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:05.679 13:46:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:05.679 13:46:07 -- common/autotest_common.sh@10 -- # set +x 00:16:10.941 13:46:13 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:10.941 13:46:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:10.941 13:46:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:10.941 13:46:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:10.941 13:46:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:10.941 13:46:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:10.941 13:46:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:10.941 13:46:13 -- nvmf/common.sh@294 -- # net_devs=() 00:16:10.941 13:46:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:10.941 13:46:13 -- nvmf/common.sh@295 -- # e810=() 00:16:10.941 13:46:13 -- nvmf/common.sh@295 -- # local -ga e810 00:16:10.941 13:46:13 -- nvmf/common.sh@296 -- # x722=() 00:16:10.941 13:46:13 -- nvmf/common.sh@296 -- # local -ga x722 00:16:10.941 13:46:13 -- nvmf/common.sh@297 -- # mlx=() 00:16:10.941 13:46:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:10.941 13:46:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.941 13:46:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:10.941 13:46:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:10.941 13:46:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:10.941 13:46:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:10.941 13:46:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:10.942 13:46:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.942 13:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:10.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:10.942 13:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:10.942 13:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:10.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:10.942 13:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
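gather_supported_nvmf_pci_devs above builds device-ID lists for Intel E810 (0x1592, 0x159b) and X722 (0x37d2) parts plus several Mellanox (0x15b3) ones, then matches them against the host's PCI bus; both ports of an E810 NIC bound to the 'ice' driver turn up at 0000:86:00.0 and 0000:86:00.1. A hedged stand-in for that matching, using a plain sysfs walk instead of the pci_bus_cache helper the script actually relies on:

intel=0x8086
e810=(0x1592 0x159b)                     # the two E810 IDs listed above
for dev in /sys/bus/pci/devices/*; do
    vendor=$(< "$dev/vendor")            # e.g. 0x8086
    device=$(< "$dev/device")            # e.g. 0x159b
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] && echo "Found ${dev##*/} ($vendor - $device)"
    done
done

The net devices found under each matched function (cvl_0_0, cvl_0_1) are what nvmf_tcp_init then splits across a namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the 10.0.0.2 target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side.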
00:16:10.942 13:46:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.942 13:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.942 13:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.942 13:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:10.942 Found net devices under 0000:86:00.0: cvl_0_0 00:16:10.942 13:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.942 13:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:10.942 13:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.942 13:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.942 13:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:10.942 Found net devices under 0000:86:00.1: cvl_0_1 00:16:10.942 13:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.942 13:46:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:10.942 13:46:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:10.942 13:46:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:10.942 13:46:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.942 13:46:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.942 13:46:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.942 13:46:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:10.942 13:46:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.942 13:46:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.942 13:46:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:10.942 13:46:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.942 13:46:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.942 13:46:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:10.942 13:46:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:10.942 13:46:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.942 13:46:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.942 13:46:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.942 13:46:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.942 13:46:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:10.942 13:46:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.942 13:46:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.200 13:46:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.200 13:46:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:11.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:11.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:16:11.200 00:16:11.200 --- 10.0.0.2 ping statistics --- 00:16:11.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.200 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:16:11.200 13:46:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:11.200 00:16:11.200 --- 10.0.0.1 ping statistics --- 00:16:11.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.200 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:11.200 13:46:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.200 13:46:13 -- nvmf/common.sh@410 -- # return 0 00:16:11.200 13:46:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.200 13:46:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.200 13:46:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.200 13:46:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.200 13:46:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.200 13:46:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.200 13:46:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.200 13:46:13 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:11.200 13:46:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:11.200 13:46:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:11.200 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:16:11.200 13:46:13 -- nvmf/common.sh@469 -- # nvmfpid=1554576 00:16:11.200 13:46:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.200 13:46:13 -- nvmf/common.sh@470 -- # waitforlisten 1554576 00:16:11.200 13:46:13 -- common/autotest_common.sh@819 -- # '[' -z 1554576 ']' 00:16:11.200 13:46:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.200 13:46:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.200 13:46:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.200 13:46:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.200 13:46:13 -- common/autotest_common.sh@10 -- # set +x 00:16:11.200 [2024-07-11 13:46:13.516446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:11.200 [2024-07-11 13:46:13.516488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.200 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.200 [2024-07-11 13:46:13.573030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.200 [2024-07-11 13:46:13.613455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.200 [2024-07-11 13:46:13.613597] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.200 [2024-07-11 13:46:13.613606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.200 [2024-07-11 13:46:13.613612] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.200 [2024-07-11 13:46:13.613651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.200 [2024-07-11 13:46:13.613752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.200 [2024-07-11 13:46:13.613835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.200 [2024-07-11 13:46:13.613836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.133 13:46:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.133 13:46:14 -- common/autotest_common.sh@852 -- # return 0 00:16:12.133 13:46:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:12.133 13:46:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 13:46:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.133 13:46:14 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 [2024-07-11 13:46:14.346573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 Malloc0 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 Malloc1 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.133 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.133 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.133 13:46:14 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.133 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.134 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 [2024-07-11 13:46:14.424133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:12.134 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.134 13:46:14 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:12.134 13:46:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.134 13:46:14 -- common/autotest_common.sh@10 -- # set +x 00:16:12.134 13:46:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.134 13:46:14 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:12.134 00:16:12.134 Discovery Log Number of Records 2, Generation counter 2 00:16:12.134 =====Discovery Log Entry 0====== 00:16:12.134 trtype: tcp 00:16:12.134 adrfam: ipv4 00:16:12.134 subtype: current discovery subsystem 00:16:12.134 treq: not required 00:16:12.134 portid: 0 00:16:12.134 trsvcid: 4420 00:16:12.134 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:12.134 traddr: 10.0.0.2 00:16:12.134 eflags: explicit discovery connections, duplicate discovery information 00:16:12.134 sectype: none 00:16:12.134 =====Discovery Log Entry 1====== 00:16:12.134 trtype: tcp 00:16:12.134 adrfam: ipv4 00:16:12.134 subtype: nvme subsystem 00:16:12.134 treq: not required 00:16:12.134 portid: 0 00:16:12.134 trsvcid: 4420 00:16:12.134 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:12.134 traddr: 10.0.0.2 00:16:12.134 eflags: none 00:16:12.134 sectype: none 00:16:12.134 13:46:14 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:12.134 13:46:14 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:12.134 13:46:14 -- nvmf/common.sh@510 -- # local dev _ 00:16:12.134 13:46:14 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:12.134 13:46:14 -- nvmf/common.sh@509 -- # nvme list 00:16:12.134 13:46:14 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:12.134 13:46:14 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:12.134 13:46:14 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:12.134 13:46:14 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:12.134 13:46:14 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:12.134 13:46:14 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:13.539 13:46:15 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:13.539 13:46:15 -- common/autotest_common.sh@1177 -- # local i=0 00:16:13.539 13:46:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.539 13:46:15 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:13.539 13:46:15 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:13.539 13:46:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:15.441 13:46:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:15.441 13:46:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:15.441 13:46:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.441 13:46:17 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:15.441 13:46:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.441 13:46:17 -- common/autotest_common.sh@1187 -- # return 0 00:16:15.441 13:46:17 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:15.441 13:46:17 -- 
nvmf/common.sh@510 -- # local dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@509 -- # nvme list 00:16:15.441 13:46:17 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:15.441 13:46:17 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:15.441 13:46:17 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:15.441 /dev/nvme0n1 ]] 00:16:15.441 13:46:17 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:15.441 13:46:17 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:15.441 13:46:17 -- nvmf/common.sh@510 -- # local dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.441 13:46:17 -- nvmf/common.sh@509 -- # nvme list 00:16:15.700 13:46:18 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:15.700 13:46:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.700 13:46:18 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:15.700 13:46:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.700 13:46:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:15.700 13:46:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:15.700 13:46:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.700 13:46:18 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:15.700 13:46:18 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:15.700 13:46:18 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.700 13:46:18 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:15.700 13:46:18 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.959 13:46:18 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.959 13:46:18 -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.959 13:46:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:15.959 13:46:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.959 13:46:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:15.959 13:46:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.959 13:46:18 -- common/autotest_common.sh@1210 -- # return 0 00:16:15.959 13:46:18 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:15.959 13:46:18 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.959 13:46:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.959 13:46:18 -- common/autotest_common.sh@10 -- # set +x 00:16:15.959 13:46:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.959 13:46:18 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:15.959 13:46:18 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:15.959 13:46:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.959 13:46:18 -- nvmf/common.sh@116 -- # sync 00:16:15.959 13:46:18 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.959 13:46:18 -- nvmf/common.sh@119 -- # set +e 00:16:15.959 13:46:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.959 13:46:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.959 rmmod nvme_tcp 00:16:15.959 rmmod nvme_fabrics 00:16:15.959 rmmod nvme_keyring 00:16:15.959 13:46:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.959 13:46:18 -- nvmf/common.sh@123 -- # set -e 00:16:15.959 13:46:18 -- nvmf/common.sh@124 -- # return 0 00:16:15.959 13:46:18 -- nvmf/common.sh@477 -- # '[' -n 1554576 ']' 00:16:15.959 13:46:18 -- nvmf/common.sh@478 -- # killprocess 1554576 00:16:15.959 13:46:18 -- common/autotest_common.sh@926 -- # '[' -z 1554576 ']' 00:16:15.959 13:46:18 -- common/autotest_common.sh@930 -- # kill -0 1554576 00:16:15.959 13:46:18 -- common/autotest_common.sh@931 -- # uname 00:16:15.959 13:46:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.959 13:46:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1554576 00:16:15.959 13:46:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.959 13:46:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.959 13:46:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1554576' 00:16:15.959 killing process with pid 1554576 00:16:15.959 13:46:18 -- common/autotest_common.sh@945 -- # kill 1554576 00:16:15.959 13:46:18 -- common/autotest_common.sh@950 -- # wait 1554576 00:16:16.217 13:46:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.217 13:46:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.217 13:46:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.217 13:46:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.217 13:46:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.217 13:46:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.217 13:46:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.217 13:46:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.749 13:46:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:18.749 00:16:18.749 real 0m12.987s 00:16:18.749 user 0m21.452s 00:16:18.749 sys 0m4.840s 00:16:18.749 13:46:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.749 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.749 ************************************ 00:16:18.750 END TEST nvmf_nvme_cli 00:16:18.750 ************************************ 00:16:18.750 13:46:20 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:16:18.750 13:46:20 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:18.750 13:46:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:18.750 13:46:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:18.750 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 ************************************ 00:16:18.750 START TEST nvmf_vfio_user 00:16:18.750 ************************************ 00:16:18.750 13:46:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:18.750 * Looking for test storage... 
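Before the vfio_user run below gets going, the nvme_cli pass that just ended reads as four commands: discover, connect, verify, disconnect. Condensed from the trace above, with the host identity left as the $NVME_HOSTNQN/$NVME_HOSTID variables that common.sh derives from 'nvme gen-hostnqn':

nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
              -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
              -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expects 2
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Both Malloc namespaces surfaced as /dev/nvme0n1 and /dev/nvme0n2 before the disconnect, so the serial count matched the expected 2.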
00:16:18.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.750 13:46:20 -- nvmf/common.sh@7 -- # uname -s 00:16:18.750 13:46:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.750 13:46:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.750 13:46:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.750 13:46:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.750 13:46:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.750 13:46:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.750 13:46:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.750 13:46:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.750 13:46:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.750 13:46:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.750 13:46:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.750 13:46:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.750 13:46:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.750 13:46:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.750 13:46:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.750 13:46:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.750 13:46:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.750 13:46:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.750 13:46:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.750 13:46:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.750 13:46:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.750 13:46:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.750 13:46:20 -- paths/export.sh@5 -- # export PATH 00:16:18.750 13:46:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.750 13:46:20 -- nvmf/common.sh@46 -- # : 0 00:16:18.750 13:46:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:18.750 13:46:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:18.750 13:46:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:18.750 13:46:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.750 13:46:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.750 13:46:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:18.750 13:46:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:18.750 13:46:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1555883 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1555883' 00:16:18.750 Process pid: 1555883 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:18.750 13:46:20 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1555883 00:16:18.750 13:46:20 -- common/autotest_common.sh@819 -- # '[' -z 1555883 ']' 00:16:18.750 13:46:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.750 13:46:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.750 13:46:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.750 13:46:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.750 13:46:20 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 [2024-07-11 13:46:20.845791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:18.750 [2024-07-11 13:46:20.845838] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.750 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.750 [2024-07-11 13:46:20.902237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.750 [2024-07-11 13:46:20.942411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:18.750 [2024-07-11 13:46:20.942531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.750 [2024-07-11 13:46:20.942542] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.750 [2024-07-11 13:46:20.942549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.750 [2024-07-11 13:46:20.942590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.750 [2024-07-11 13:46:20.942716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.750 [2024-07-11 13:46:20.942807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.750 [2024-07-11 13:46:20.942808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.316 13:46:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.316 13:46:21 -- common/autotest_common.sh@852 -- # return 0 00:16:19.316 13:46:21 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:20.250 13:46:22 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:20.508 13:46:22 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:20.508 13:46:22 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:20.508 13:46:22 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:20.508 13:46:22 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:20.508 13:46:22 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:20.766 Malloc1 00:16:20.766 13:46:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:21.025 13:46:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:21.025 13:46:23 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:21.282 13:46:23 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:21.282 13:46:23 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:21.282 13:46:23 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:21.539 Malloc2 00:16:21.539 13:46:23 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:21.539 13:46:23 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:21.797 13:46:24 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:22.057 13:46:24 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:22.057 [2024-07-11 13:46:24.328868] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:22.057 [2024-07-11 13:46:24.328911] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556593 ] 00:16:22.057 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.057 [2024-07-11 13:46:24.359646] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:22.057 [2024-07-11 13:46:24.361985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:22.057 [2024-07-11 13:46:24.362002] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f285d613000 00:16:22.057 [2024-07-11 13:46:24.362985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.363982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.364991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.365993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.367005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.368010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.369022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.370025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.057 [2024-07-11 13:46:24.371042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:22.057 [2024-07-11 13:46:24.371051] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f285c3d9000 00:16:22.057 [2024-07-11 13:46:24.371992] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:22.057 [2024-07-11 13:46:24.380594] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:22.057 [2024-07-11 13:46:24.380613] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:22.057 [2024-07-11 13:46:24.386154] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:22.057 [2024-07-11 13:46:24.386193] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:22.057 [2024-07-11 13:46:24.386261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:22.057 [2024-07-11 13:46:24.386279] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:22.057 [2024-07-11 13:46:24.386284] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:22.057 [2024-07-11 13:46:24.388164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:22.057 [2024-07-11 13:46:24.388172] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:22.057 [2024-07-11 13:46:24.388178] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:22.057 [2024-07-11 13:46:24.389153] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:22.057 [2024-07-11 13:46:24.389164] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:22.057 [2024-07-11 13:46:24.389171] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.390154] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:22.057 [2024-07-11 13:46:24.390163] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.391162] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:16:22.057 [2024-07-11 13:46:24.391169] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:22.057 [2024-07-11 13:46:24.391173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.391179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.391283] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:22.057 [2024-07-11 13:46:24.391288] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.391292] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:22.057 [2024-07-11 13:46:24.392164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:22.057 [2024-07-11 13:46:24.393339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:22.057 [2024-07-11 13:46:24.394182] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:22.057 [2024-07-11 13:46:24.395207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:22.057 [2024-07-11 13:46:24.396195] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:22.057 [2024-07-11 13:46:24.396203] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:22.057 [2024-07-11 13:46:24.396207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:22.057 [2024-07-11 13:46:24.396224] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:22.058 [2024-07-11 13:46:24.396235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396247] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.058 [2024-07-11 13:46:24.396251] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.058 [2024-07-11 13:46:24.396263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396319] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:22.058 [2024-07-11 13:46:24.396324] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:22.058 [2024-07-11 13:46:24.396327] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:22.058 [2024-07-11 13:46:24.396333] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:22.058 [2024-07-11 13:46:24.396338] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:22.058 [2024-07-11 13:46:24.396342] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:22.058 [2024-07-11 13:46:24.396346] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.058 [2024-07-11 13:46:24.396391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.058 [2024-07-11 13:46:24.396398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.058 [2024-07-11 13:46:24.396405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.058 [2024-07-11 13:46:24.396409] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396417] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396437] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:22.058 [2024-07-11 13:46:24.396442] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396447] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396455] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396531] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396537] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:22.058 [2024-07-11 13:46:24.396541] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:22.058 [2024-07-11 13:46:24.396550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396571] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:22.058 [2024-07-11 13:46:24.396578] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396585] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396591] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.058 [2024-07-11 13:46:24.396595] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.058 [2024-07-11 13:46:24.396600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396632] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396644] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.058 [2024-07-11 13:46:24.396648] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.058 [2024-07-11 13:46:24.396653] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396674] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396687] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396696] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396700] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:22.058 [2024-07-11 13:46:24.396704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:22.058 [2024-07-11 13:46:24.396709] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:22.058 [2024-07-11 13:46:24.396726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396811] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:22.058 [2024-07-11 13:46:24.396815] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:22.058 [2024-07-11 13:46:24.396819] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:22.058 [2024-07-11 
13:46:24.396821] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:22.058 [2024-07-11 13:46:24.396827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:22.058 [2024-07-11 13:46:24.396833] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:22.058 [2024-07-11 13:46:24.396837] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:22.058 [2024-07-11 13:46:24.396842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396848] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:22.058 [2024-07-11 13:46:24.396852] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.058 [2024-07-11 13:46:24.396857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396863] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:22.058 [2024-07-11 13:46:24.396867] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:22.058 [2024-07-11 13:46:24.396872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:22.058 [2024-07-11 13:46:24.396879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:22.058 [2024-07-11 13:46:24.396904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:22.058 ===================================================== 00:16:22.058 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:22.058 ===================================================== 00:16:22.058 Controller Capabilities/Features 00:16:22.058 ================================ 00:16:22.058 Vendor ID: 4e58 00:16:22.058 Subsystem Vendor ID: 4e58 00:16:22.058 Serial Number: SPDK1 00:16:22.058 Model Number: SPDK bdev Controller 00:16:22.058 Firmware Version: 24.01.1 00:16:22.058 Recommended Arb Burst: 6 00:16:22.058 IEEE OUI Identifier: 8d 6b 50 00:16:22.058 Multi-path I/O 00:16:22.059 May have multiple subsystem ports: Yes 00:16:22.059 May have multiple controllers: Yes 00:16:22.059 Associated with SR-IOV VF: No 00:16:22.059 Max Data Transfer Size: 131072 00:16:22.059 Max Number of Namespaces: 32 00:16:22.059 Max Number of I/O Queues: 127 00:16:22.059 NVMe Specification Version (VS): 1.3 00:16:22.059 NVMe Specification Version (Identify): 1.3 00:16:22.059 Maximum Queue Entries: 256 00:16:22.059 Contiguous Queues Required: Yes 00:16:22.059 Arbitration Mechanisms Supported 00:16:22.059 
Weighted Round Robin: Not Supported 00:16:22.059 Vendor Specific: Not Supported 00:16:22.059 Reset Timeout: 15000 ms 00:16:22.059 Doorbell Stride: 4 bytes 00:16:22.059 NVM Subsystem Reset: Not Supported 00:16:22.059 Command Sets Supported 00:16:22.059 NVM Command Set: Supported 00:16:22.059 Boot Partition: Not Supported 00:16:22.059 Memory Page Size Minimum: 4096 bytes 00:16:22.059 Memory Page Size Maximum: 4096 bytes 00:16:22.059 Persistent Memory Region: Not Supported 00:16:22.059 Optional Asynchronous Events Supported 00:16:22.059 Namespace Attribute Notices: Supported 00:16:22.059 Firmware Activation Notices: Not Supported 00:16:22.059 ANA Change Notices: Not Supported 00:16:22.059 PLE Aggregate Log Change Notices: Not Supported 00:16:22.059 LBA Status Info Alert Notices: Not Supported 00:16:22.059 EGE Aggregate Log Change Notices: Not Supported 00:16:22.059 Normal NVM Subsystem Shutdown event: Not Supported 00:16:22.059 Zone Descriptor Change Notices: Not Supported 00:16:22.059 Discovery Log Change Notices: Not Supported 00:16:22.059 Controller Attributes 00:16:22.059 128-bit Host Identifier: Supported 00:16:22.059 Non-Operational Permissive Mode: Not Supported 00:16:22.059 NVM Sets: Not Supported 00:16:22.059 Read Recovery Levels: Not Supported 00:16:22.059 Endurance Groups: Not Supported 00:16:22.059 Predictable Latency Mode: Not Supported 00:16:22.059 Traffic Based Keep ALive: Not Supported 00:16:22.059 Namespace Granularity: Not Supported 00:16:22.059 SQ Associations: Not Supported 00:16:22.059 UUID List: Not Supported 00:16:22.059 Multi-Domain Subsystem: Not Supported 00:16:22.059 Fixed Capacity Management: Not Supported 00:16:22.059 Variable Capacity Management: Not Supported 00:16:22.059 Delete Endurance Group: Not Supported 00:16:22.059 Delete NVM Set: Not Supported 00:16:22.059 Extended LBA Formats Supported: Not Supported 00:16:22.059 Flexible Data Placement Supported: Not Supported 00:16:22.059 00:16:22.059 Controller Memory Buffer Support 00:16:22.059 ================================ 00:16:22.059 Supported: No 00:16:22.059 00:16:22.059 Persistent Memory Region Support 00:16:22.059 ================================ 00:16:22.059 Supported: No 00:16:22.059 00:16:22.059 Admin Command Set Attributes 00:16:22.059 ============================ 00:16:22.059 Security Send/Receive: Not Supported 00:16:22.059 Format NVM: Not Supported 00:16:22.059 Firmware Activate/Download: Not Supported 00:16:22.059 Namespace Management: Not Supported 00:16:22.059 Device Self-Test: Not Supported 00:16:22.059 Directives: Not Supported 00:16:22.059 NVMe-MI: Not Supported 00:16:22.059 Virtualization Management: Not Supported 00:16:22.059 Doorbell Buffer Config: Not Supported 00:16:22.059 Get LBA Status Capability: Not Supported 00:16:22.059 Command & Feature Lockdown Capability: Not Supported 00:16:22.059 Abort Command Limit: 4 00:16:22.059 Async Event Request Limit: 4 00:16:22.059 Number of Firmware Slots: N/A 00:16:22.059 Firmware Slot 1 Read-Only: N/A 00:16:22.059 Firmware Activation Without Reset: N/A 00:16:22.059 Multiple Update Detection Support: N/A 00:16:22.059 Firmware Update Granularity: No Information Provided 00:16:22.059 Per-Namespace SMART Log: No 00:16:22.059 Asymmetric Namespace Access Log Page: Not Supported 00:16:22.059 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:22.059 Command Effects Log Page: Supported 00:16:22.059 Get Log Page Extended Data: Supported 00:16:22.059 Telemetry Log Pages: Not Supported 00:16:22.059 Persistent Event Log Pages: Not Supported 00:16:22.059 Supported 
Log Pages Log Page: May Support 00:16:22.059 Commands Supported & Effects Log Page: Not Supported 00:16:22.059 Feature Identifiers & Effects Log Page:May Support 00:16:22.059 NVMe-MI Commands & Effects Log Page: May Support 00:16:22.059 Data Area 4 for Telemetry Log: Not Supported 00:16:22.059 Error Log Page Entries Supported: 128 00:16:22.059 Keep Alive: Supported 00:16:22.059 Keep Alive Granularity: 10000 ms 00:16:22.059 00:16:22.059 NVM Command Set Attributes 00:16:22.059 ========================== 00:16:22.059 Submission Queue Entry Size 00:16:22.059 Max: 64 00:16:22.059 Min: 64 00:16:22.059 Completion Queue Entry Size 00:16:22.059 Max: 16 00:16:22.059 Min: 16 00:16:22.059 Number of Namespaces: 32 00:16:22.059 Compare Command: Supported 00:16:22.059 Write Uncorrectable Command: Not Supported 00:16:22.059 Dataset Management Command: Supported 00:16:22.059 Write Zeroes Command: Supported 00:16:22.059 Set Features Save Field: Not Supported 00:16:22.059 Reservations: Not Supported 00:16:22.059 Timestamp: Not Supported 00:16:22.059 Copy: Supported 00:16:22.059 Volatile Write Cache: Present 00:16:22.059 Atomic Write Unit (Normal): 1 00:16:22.059 Atomic Write Unit (PFail): 1 00:16:22.059 Atomic Compare & Write Unit: 1 00:16:22.059 Fused Compare & Write: Supported 00:16:22.059 Scatter-Gather List 00:16:22.059 SGL Command Set: Supported (Dword aligned) 00:16:22.059 SGL Keyed: Not Supported 00:16:22.059 SGL Bit Bucket Descriptor: Not Supported 00:16:22.059 SGL Metadata Pointer: Not Supported 00:16:22.059 Oversized SGL: Not Supported 00:16:22.059 SGL Metadata Address: Not Supported 00:16:22.059 SGL Offset: Not Supported 00:16:22.059 Transport SGL Data Block: Not Supported 00:16:22.059 Replay Protected Memory Block: Not Supported 00:16:22.059 00:16:22.059 Firmware Slot Information 00:16:22.059 ========================= 00:16:22.059 Active slot: 1 00:16:22.059 Slot 1 Firmware Revision: 24.01.1 00:16:22.059 00:16:22.059 00:16:22.059 Commands Supported and Effects 00:16:22.059 ============================== 00:16:22.059 Admin Commands 00:16:22.059 -------------- 00:16:22.059 Get Log Page (02h): Supported 00:16:22.059 Identify (06h): Supported 00:16:22.059 Abort (08h): Supported 00:16:22.059 Set Features (09h): Supported 00:16:22.059 Get Features (0Ah): Supported 00:16:22.059 Asynchronous Event Request (0Ch): Supported 00:16:22.059 Keep Alive (18h): Supported 00:16:22.059 I/O Commands 00:16:22.059 ------------ 00:16:22.059 Flush (00h): Supported LBA-Change 00:16:22.059 Write (01h): Supported LBA-Change 00:16:22.059 Read (02h): Supported 00:16:22.059 Compare (05h): Supported 00:16:22.059 Write Zeroes (08h): Supported LBA-Change 00:16:22.059 Dataset Management (09h): Supported LBA-Change 00:16:22.059 Copy (19h): Supported LBA-Change 00:16:22.059 Unknown (79h): Supported LBA-Change 00:16:22.059 Unknown (7Ah): Supported 00:16:22.059 00:16:22.059 Error Log 00:16:22.059 ========= 00:16:22.059 00:16:22.059 Arbitration 00:16:22.059 =========== 00:16:22.059 Arbitration Burst: 1 00:16:22.059 00:16:22.059 Power Management 00:16:22.059 ================ 00:16:22.059 Number of Power States: 1 00:16:22.059 Current Power State: Power State #0 00:16:22.059 Power State #0: 00:16:22.059 Max Power: 0.00 W 00:16:22.059 Non-Operational State: Operational 00:16:22.059 Entry Latency: Not Reported 00:16:22.059 Exit Latency: Not Reported 00:16:22.059 Relative Read Throughput: 0 00:16:22.059 Relative Read Latency: 0 00:16:22.059 Relative Write Throughput: 0 00:16:22.059 Relative Write Latency: 0 00:16:22.059 Idle Power: Not 
Reported 00:16:22.059 Active Power: Not Reported 00:16:22.059 Non-Operational Permissive Mode: Not Supported 00:16:22.059 00:16:22.059 Health Information 00:16:22.059 ================== 00:16:22.059 Critical Warnings: 00:16:22.059 Available Spare Space: OK 00:16:22.059 Temperature: OK 00:16:22.059 Device Reliability: OK 00:16:22.059 Read Only: No 00:16:22.059 Volatile Memory Backup: OK 00:16:22.059 [2024-07-11 13:46:24.396998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:22.059 [2024-07-11 13:46:24.397007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:22.059 [2024-07-11 13:46:24.397029] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:22.059 [2024-07-11 13:46:24.397037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.059 [2024-07-11 13:46:24.397044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.059 [2024-07-11 13:46:24.397049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.059 [2024-07-11 13:46:24.397055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.059 [2024-07-11 13:46:24.400165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:22.059 [2024-07-11 13:46:24.400175] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:22.059 [2024-07-11 13:46:24.400249] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:22.059 [2024-07-11 13:46:24.400254] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:22.060 [2024-07-11 13:46:24.401229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:22.060 [2024-07-11 13:46:24.401238] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:22.060 [2024-07-11 13:46:24.401284] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:22.060 [2024-07-11 13:46:24.402257] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:22.060 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:22.060 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:22.060 Available Spare: 0% 00:16:22.060 Available Spare Threshold: 0% 00:16:22.060 Life Percentage Used: 0% 00:16:22.060 Data Units Read: 0 00:16:22.060 Data Units Written: 0 00:16:22.060 Host Read Commands: 0 00:16:22.060 Host Write Commands: 0 00:16:22.060 Controller Busy Time: 0 minutes 00:16:22.060 Power Cycles: 0 00:16:22.060 Power On Hours: 0 hours 00:16:22.060 Unsafe Shutdowns: 0 00:16:22.060 Unrecoverable Media Errors: 0 00:16:22.060 Lifetime Error Log Entries: 0 00:16:22.060 Warning Temperature
Time: 0 minutes 00:16:22.060 Critical Temperature Time: 0 minutes 00:16:22.060 00:16:22.060 Number of Queues 00:16:22.060 ================ 00:16:22.060 Number of I/O Submission Queues: 127 00:16:22.060 Number of I/O Completion Queues: 127 00:16:22.060 00:16:22.060 Active Namespaces 00:16:22.060 ================= 00:16:22.060 Namespace ID:1 00:16:22.060 Error Recovery Timeout: Unlimited 00:16:22.060 Command Set Identifier: NVM (00h) 00:16:22.060 Deallocate: Supported 00:16:22.060 Deallocated/Unwritten Error: Not Supported 00:16:22.060 Deallocated Read Value: Unknown 00:16:22.060 Deallocate in Write Zeroes: Not Supported 00:16:22.060 Deallocated Guard Field: 0xFFFF 00:16:22.060 Flush: Supported 00:16:22.060 Reservation: Supported 00:16:22.060 Namespace Sharing Capabilities: Multiple Controllers 00:16:22.060 Size (in LBAs): 131072 (0GiB) 00:16:22.060 Capacity (in LBAs): 131072 (0GiB) 00:16:22.060 Utilization (in LBAs): 131072 (0GiB) 00:16:22.060 NGUID: B6CDE80EE96E40CC93536A0C3B69641E 00:16:22.060 UUID: b6cde80e-e96e-40cc-9353-6a0c3b69641e 00:16:22.060 Thin Provisioning: Not Supported 00:16:22.060 Per-NS Atomic Units: Yes 00:16:22.060 Atomic Boundary Size (Normal): 0 00:16:22.060 Atomic Boundary Size (PFail): 0 00:16:22.060 Atomic Boundary Offset: 0 00:16:22.060 Maximum Single Source Range Length: 65535 00:16:22.060 Maximum Copy Length: 65535 00:16:22.060 Maximum Source Range Count: 1 00:16:22.060 NGUID/EUI64 Never Reused: No 00:16:22.060 Namespace Write Protected: No 00:16:22.060 Number of LBA Formats: 1 00:16:22.060 Current LBA Format: LBA Format #00 00:16:22.060 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:22.060 00:16:22.060 13:46:24 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:22.060 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.366 Initializing NVMe Controllers 00:16:27.366 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:27.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:27.366 Initialization complete. Launching workers. 00:16:27.366 ======================================================== 00:16:27.366 Latency(us) 00:16:27.366 Device Information : IOPS MiB/s Average min max 00:16:27.366 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39914.44 155.92 3206.47 986.41 6656.38 00:16:27.366 ======================================================== 00:16:27.366 Total : 39914.44 155.92 3206.47 986.41 6656.38 00:16:27.366 00:16:27.366 13:46:29 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:27.366 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.633 Initializing NVMe Controllers 00:16:32.633 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:32.633 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:32.633 Initialization complete. Launching workers. 
00:16:32.633 ======================================================== 00:16:32.633 Latency(us) 00:16:32.633 Device Information : IOPS MiB/s Average min max 00:16:32.633 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.30 62.71 7978.33 6988.94 8972.30 00:16:32.633 ======================================================== 00:16:32.633 Total : 16054.30 62.71 7978.33 6988.94 8972.30 00:16:32.633 00:16:32.633 13:46:34 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:32.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.938 Initializing NVMe Controllers 00:16:37.938 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:37.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:37.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:37.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:37.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:37.938 Initialization complete. Launching workers. 00:16:37.938 Starting thread on core 2 00:16:37.938 Starting thread on core 3 00:16:37.938 Starting thread on core 1 00:16:37.938 13:46:40 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:37.938 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.228 Initializing NVMe Controllers 00:16:41.228 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.228 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.228 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:41.228 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:41.228 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:41.228 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:41.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:41.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:41.228 Initialization complete. Launching workers. 
00:16:41.228 Starting thread on core 1 with urgent priority queue 00:16:41.228 Starting thread on core 2 with urgent priority queue 00:16:41.228 Starting thread on core 3 with urgent priority queue 00:16:41.228 Starting thread on core 0 with urgent priority queue 00:16:41.228 SPDK bdev Controller (SPDK1 ) core 0: 3816.33 IO/s 26.20 secs/100000 ios 00:16:41.228 SPDK bdev Controller (SPDK1 ) core 1: 3669.67 IO/s 27.25 secs/100000 ios 00:16:41.228 SPDK bdev Controller (SPDK1 ) core 2: 3879.67 IO/s 25.78 secs/100000 ios 00:16:41.228 SPDK bdev Controller (SPDK1 ) core 3: 4003.33 IO/s 24.98 secs/100000 ios 00:16:41.228 ======================================================== 00:16:41.228 00:16:41.228 13:46:43 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:41.228 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.485 Initializing NVMe Controllers 00:16:41.485 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.485 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.485 Namespace ID: 1 size: 0GB 00:16:41.485 Initialization complete. 00:16:41.485 INFO: using host memory buffer for IO 00:16:41.485 Hello world! 00:16:41.485 13:46:43 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:41.485 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.859 Initializing NVMe Controllers 00:16:42.859 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.859 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.859 Initialization complete. Launching workers. 
00:16:42.859 submit (in ns) avg, min, max = 6772.6, 3189.6, 3999668.7 00:16:42.859 complete (in ns) avg, min, max = 20063.2, 1756.5, 4994297.4 00:16:42.859 00:16:42.859 Submit histogram 00:16:42.859 ================ 00:16:42.859 Range in us Cumulative Count 00:16:42.859 3.186 - 3.200: 0.0180% ( 3) 00:16:42.859 3.200 - 3.214: 0.0421% ( 4) 00:16:42.859 3.214 - 3.228: 0.0481% ( 1) 00:16:42.859 3.228 - 3.242: 0.5108% ( 77) 00:16:42.859 3.242 - 3.256: 2.6442% ( 355) 00:16:42.859 3.256 - 3.270: 6.1118% ( 577) 00:16:42.859 3.270 - 3.283: 10.1983% ( 680) 00:16:42.859 3.283 - 3.297: 15.3125% ( 851) 00:16:42.860 3.297 - 3.311: 21.1058% ( 964) 00:16:42.860 3.311 - 3.325: 26.5144% ( 900) 00:16:42.860 3.325 - 3.339: 32.3798% ( 976) 00:16:42.860 3.339 - 3.353: 38.0048% ( 936) 00:16:42.860 3.353 - 3.367: 42.5481% ( 756) 00:16:42.860 3.367 - 3.381: 46.8149% ( 710) 00:16:42.860 3.381 - 3.395: 52.2115% ( 898) 00:16:42.860 3.395 - 3.409: 59.2308% ( 1168) 00:16:42.860 3.409 - 3.423: 63.9603% ( 787) 00:16:42.860 3.423 - 3.437: 69.2668% ( 883) 00:16:42.860 3.437 - 3.450: 74.8678% ( 932) 00:16:42.860 3.450 - 3.464: 78.8642% ( 665) 00:16:42.860 3.464 - 3.478: 81.8570% ( 498) 00:16:42.860 3.478 - 3.492: 84.3149% ( 409) 00:16:42.860 3.492 - 3.506: 85.9075% ( 265) 00:16:42.860 3.506 - 3.520: 86.9171% ( 168) 00:16:42.860 3.520 - 3.534: 87.6442% ( 121) 00:16:42.860 3.534 - 3.548: 88.2031% ( 93) 00:16:42.860 3.548 - 3.562: 88.7200% ( 86) 00:16:42.860 3.562 - 3.590: 90.1142% ( 232) 00:16:42.860 3.590 - 3.617: 91.8510% ( 289) 00:16:42.860 3.617 - 3.645: 93.3834% ( 255) 00:16:42.860 3.645 - 3.673: 95.1803% ( 299) 00:16:42.860 3.673 - 3.701: 96.8630% ( 280) 00:16:42.860 3.701 - 3.729: 98.1070% ( 207) 00:16:42.860 3.729 - 3.757: 98.7861% ( 113) 00:16:42.860 3.757 - 3.784: 99.2548% ( 78) 00:16:42.860 3.784 - 3.812: 99.4952% ( 40) 00:16:42.860 3.812 - 3.840: 99.6274% ( 22) 00:16:42.860 3.840 - 3.868: 99.6695% ( 7) 00:16:42.860 3.868 - 3.896: 99.6815% ( 2) 00:16:42.860 5.231 - 5.259: 99.6875% ( 1) 00:16:42.860 5.259 - 5.287: 99.6935% ( 1) 00:16:42.860 5.287 - 5.315: 99.6995% ( 1) 00:16:42.860 5.370 - 5.398: 99.7115% ( 2) 00:16:42.860 5.426 - 5.454: 99.7236% ( 2) 00:16:42.860 5.510 - 5.537: 99.7356% ( 2) 00:16:42.860 5.565 - 5.593: 99.7476% ( 2) 00:16:42.860 5.677 - 5.704: 99.7536% ( 1) 00:16:42.860 5.788 - 5.816: 99.7596% ( 1) 00:16:42.860 5.843 - 5.871: 99.7656% ( 1) 00:16:42.860 5.871 - 5.899: 99.7776% ( 2) 00:16:42.860 6.066 - 6.094: 99.7837% ( 1) 00:16:42.860 6.150 - 6.177: 99.7897% ( 1) 00:16:42.860 6.344 - 6.372: 99.7957% ( 1) 00:16:42.860 6.372 - 6.400: 99.8017% ( 1) 00:16:42.860 6.456 - 6.483: 99.8137% ( 2) 00:16:42.860 6.567 - 6.595: 99.8197% ( 1) 00:16:42.860 6.650 - 6.678: 99.8257% ( 1) 00:16:42.860 6.817 - 6.845: 99.8317% ( 1) 00:16:42.860 6.873 - 6.901: 99.8377% ( 1) 00:16:42.860 6.984 - 7.012: 99.8498% ( 2) 00:16:42.860 7.012 - 7.040: 99.8558% ( 1) 00:16:42.860 7.068 - 7.096: 99.8618% ( 1) 00:16:42.860 7.096 - 7.123: 99.8738% ( 2) 00:16:42.860 7.123 - 7.179: 99.8798% ( 1) 00:16:42.860 7.179 - 7.235: 99.8858% ( 1) 00:16:42.860 7.290 - 7.346: 99.8918% ( 1) 00:16:42.860 7.402 - 7.457: 99.8978% ( 1) 00:16:42.860 7.736 - 7.791: 99.9099% ( 2) 00:16:42.860 7.791 - 7.847: 99.9159% ( 1) 00:16:42.860 3989.148 - 4017.642: 100.0000% ( 14) 00:16:42.860 00:16:42.860 Complete histogram 00:16:42.860 ================== 00:16:42.860 Range in us Cumulative Count 00:16:42.860 1.753 - 1.760: 0.0120% ( 2) 00:16:42.860 1.760 - 1.767: 0.1082% ( 16) 00:16:42.860 1.767 - 1.774: 0.2284% ( 20) 00:16:42.860 1.774 - 1.781: 0.2704% ( 7) 
00:16:42.860 1.781 - 1.795: 0.3245% ( 9) 00:16:42.860 1.795 - 1.809: 5.2043% ( 812) 00:16:42.860 1.809 - 1.823: 43.0529% ( 6298) 00:16:42.860 1.823 - 1.837: 75.4567% ( 5392) 00:16:42.860 1.837 - 1.850: 82.5901% ( 1187) 00:16:42.860 1.850 - 1.864: 86.9050% ( 718) 00:16:42.860 1.864 - 1.878: 93.2993% ( 1064) 00:16:42.860 1.878 - 1.892: 96.4183% ( 519) 00:16:42.860 1.892 - 1.906: 98.0649% ( 274) 00:16:42.860 1.906 - 1.920: 99.0325% ( 161) 00:16:42.860 1.920 - 1.934: 99.2849% ( 42) 00:16:42.860 1.934 - 1.948: 99.3149% ( 5) 00:16:42.860 1.948 - 1.962: 99.3329% ( 3) 00:16:42.860 1.962 - 1.976: 99.3510% ( 3) 00:16:42.860 1.976 - 1.990: 99.3690% ( 3) 00:16:42.860 1.990 - 2.003: 99.3750% ( 1) 00:16:42.860 2.003 - 2.017: 99.3810% ( 1) 00:16:42.860 2.045 - 2.059: 99.3870% ( 1) 00:16:42.860 3.757 - 3.784: 99.3930% ( 1) 00:16:42.860 4.174 - 4.202: 99.3990% ( 1) 00:16:42.860 4.313 - 4.341: 99.4050% ( 1) 00:16:42.860 4.452 - 4.480: 99.4111% ( 1) 00:16:42.860 4.591 - 4.619: 99.4171% ( 1) 00:16:42.860 4.619 - 4.647: 99.4231% ( 1) 00:16:42.860 4.814 - 4.842: 99.4351% ( 2) 00:16:42.860 4.870 - 4.897: 99.4471% ( 2) 00:16:42.860 4.925 - 4.953: 99.4591% ( 2) 00:16:42.860 5.009 - 5.037: 99.4651% ( 1) 00:16:42.860 5.092 - 5.120: 99.4712% ( 1) 00:16:42.860 5.120 - 5.148: 99.4772% ( 1) 00:16:42.860 5.176 - 5.203: 99.4832% ( 1) 00:16:42.860 5.315 - 5.343: 99.4892% ( 1) 00:16:42.860 5.454 - 5.482: 99.4952% ( 1) 00:16:42.860 5.621 - 5.649: 99.5012% ( 1) 00:16:42.860 5.983 - 6.010: 99.5072% ( 1) 00:16:42.860 6.400 - 6.428: 99.5132% ( 1) 00:16:42.860 6.567 - 6.595: 99.5192% ( 1) 00:16:42.860 11.798 - 11.854: 99.5252% ( 1) 00:16:42.860 13.023 - 13.078: 99.5312% ( 1) 00:16:42.860 13.468 - 13.523: 99.5373% ( 1) 00:16:42.860 39.624 - 39.847: 99.5433% ( 1) 00:16:42.860 2464.723 - 2478.970: 99.5493% ( 1) 00:16:42.860 3162.824 - 3177.071: 99.5553% ( 1) 00:16:42.860 3989.148 - 4017.642: 99.9880% ( 72) 00:16:42.860 4986.435 - 5014.929: 100.0000% ( 2) 00:16:42.860 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:42.860 [2024-07-11 13:46:45.282868] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:42.860 [ 00:16:42.860 { 00:16:42.860 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:42.860 "subtype": "Discovery", 00:16:42.860 "listen_addresses": [], 00:16:42.860 "allow_any_host": true, 00:16:42.860 "hosts": [] 00:16:42.860 }, 00:16:42.860 { 00:16:42.860 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:42.860 "subtype": "NVMe", 00:16:42.860 "listen_addresses": [ 00:16:42.860 { 00:16:42.860 "transport": "VFIOUSER", 00:16:42.860 "trtype": "VFIOUSER", 00:16:42.860 "adrfam": "IPv4", 00:16:42.860 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:42.860 "trsvcid": "0" 00:16:42.860 } 00:16:42.860 ], 00:16:42.860 "allow_any_host": true, 00:16:42.860 "hosts": [], 00:16:42.860 "serial_number": "SPDK1", 00:16:42.860 "model_number": "SPDK bdev Controller", 00:16:42.860 "max_namespaces": 32, 
00:16:42.860 "min_cntlid": 1, 00:16:42.860 "max_cntlid": 65519, 00:16:42.860 "namespaces": [ 00:16:42.860 { 00:16:42.860 "nsid": 1, 00:16:42.860 "bdev_name": "Malloc1", 00:16:42.860 "name": "Malloc1", 00:16:42.860 "nguid": "B6CDE80EE96E40CC93536A0C3B69641E", 00:16:42.860 "uuid": "b6cde80e-e96e-40cc-9353-6a0c3b69641e" 00:16:42.860 } 00:16:42.860 ] 00:16:42.860 }, 00:16:42.860 { 00:16:42.860 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:42.860 "subtype": "NVMe", 00:16:42.860 "listen_addresses": [ 00:16:42.860 { 00:16:42.860 "transport": "VFIOUSER", 00:16:42.860 "trtype": "VFIOUSER", 00:16:42.860 "adrfam": "IPv4", 00:16:42.860 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:42.860 "trsvcid": "0" 00:16:42.860 } 00:16:42.860 ], 00:16:42.860 "allow_any_host": true, 00:16:42.860 "hosts": [], 00:16:42.860 "serial_number": "SPDK2", 00:16:42.860 "model_number": "SPDK bdev Controller", 00:16:42.860 "max_namespaces": 32, 00:16:42.860 "min_cntlid": 1, 00:16:42.860 "max_cntlid": 65519, 00:16:42.860 "namespaces": [ 00:16:42.860 { 00:16:42.860 "nsid": 1, 00:16:42.860 "bdev_name": "Malloc2", 00:16:42.860 "name": "Malloc2", 00:16:42.860 "nguid": "F9C4204C85084B328504789AA1B8A85C", 00:16:42.860 "uuid": "f9c4204c-8508-4b32-8504-789aa1b8a85c" 00:16:42.860 } 00:16:42.860 ] 00:16:42.860 } 00:16:42.860 ] 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1560109 00:16:42.860 13:46:45 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:42.860 13:46:45 -- common/autotest_common.sh@1244 -- # local i=0 00:16:43.119 13:46:45 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.119 13:46:45 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.119 13:46:45 -- common/autotest_common.sh@1255 -- # return 0 00:16:43.119 13:46:45 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:43.119 13:46:45 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:43.119 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.119 Malloc3 00:16:43.119 13:46:45 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:43.377 13:46:45 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:43.377 Asynchronous Event Request test 00:16:43.377 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:43.377 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:43.377 Registering asynchronous event callbacks... 00:16:43.377 Starting namespace attribute notice tests for all controllers... 00:16:43.377 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:43.377 aer_cb - Changed Namespace 00:16:43.377 Cleaning up... 
00:16:43.377 [ 00:16:43.377 { 00:16:43.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:43.377 "subtype": "Discovery", 00:16:43.377 "listen_addresses": [], 00:16:43.377 "allow_any_host": true, 00:16:43.377 "hosts": [] 00:16:43.377 }, 00:16:43.377 { 00:16:43.378 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:43.378 "subtype": "NVMe", 00:16:43.378 "listen_addresses": [ 00:16:43.378 { 00:16:43.378 "transport": "VFIOUSER", 00:16:43.378 "trtype": "VFIOUSER", 00:16:43.378 "adrfam": "IPv4", 00:16:43.378 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:43.378 "trsvcid": "0" 00:16:43.378 } 00:16:43.378 ], 00:16:43.378 "allow_any_host": true, 00:16:43.378 "hosts": [], 00:16:43.378 "serial_number": "SPDK1", 00:16:43.378 "model_number": "SPDK bdev Controller", 00:16:43.378 "max_namespaces": 32, 00:16:43.378 "min_cntlid": 1, 00:16:43.378 "max_cntlid": 65519, 00:16:43.378 "namespaces": [ 00:16:43.378 { 00:16:43.378 "nsid": 1, 00:16:43.378 "bdev_name": "Malloc1", 00:16:43.378 "name": "Malloc1", 00:16:43.378 "nguid": "B6CDE80EE96E40CC93536A0C3B69641E", 00:16:43.378 "uuid": "b6cde80e-e96e-40cc-9353-6a0c3b69641e" 00:16:43.378 }, 00:16:43.378 { 00:16:43.378 "nsid": 2, 00:16:43.378 "bdev_name": "Malloc3", 00:16:43.378 "name": "Malloc3", 00:16:43.378 "nguid": "76DE07D3D05E4027BEA7BDD691FD8B44", 00:16:43.378 "uuid": "76de07d3-d05e-4027-bea7-bdd691fd8b44" 00:16:43.378 } 00:16:43.378 ] 00:16:43.378 }, 00:16:43.378 { 00:16:43.378 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:43.378 "subtype": "NVMe", 00:16:43.378 "listen_addresses": [ 00:16:43.378 { 00:16:43.378 "transport": "VFIOUSER", 00:16:43.378 "trtype": "VFIOUSER", 00:16:43.378 "adrfam": "IPv4", 00:16:43.378 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:43.378 "trsvcid": "0" 00:16:43.378 } 00:16:43.378 ], 00:16:43.378 "allow_any_host": true, 00:16:43.378 "hosts": [], 00:16:43.378 "serial_number": "SPDK2", 00:16:43.378 "model_number": "SPDK bdev Controller", 00:16:43.378 "max_namespaces": 32, 00:16:43.378 "min_cntlid": 1, 00:16:43.378 "max_cntlid": 65519, 00:16:43.378 "namespaces": [ 00:16:43.378 { 00:16:43.378 "nsid": 1, 00:16:43.378 "bdev_name": "Malloc2", 00:16:43.378 "name": "Malloc2", 00:16:43.378 "nguid": "F9C4204C85084B328504789AA1B8A85C", 00:16:43.378 "uuid": "f9c4204c-8508-4b32-8504-789aa1b8a85c" 00:16:43.378 } 00:16:43.378 ] 00:16:43.378 } 00:16:43.378 ] 00:16:43.637 13:46:45 -- target/nvmf_vfio_user.sh@44 -- # wait 1560109 00:16:43.637 13:46:45 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:43.637 13:46:45 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:43.637 13:46:45 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:43.637 13:46:45 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:43.637 [2024-07-11 13:46:45.874467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:43.637 [2024-07-11 13:46:45.874499] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560121 ] 00:16:43.637 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.637 [2024-07-11 13:46:45.905536] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:43.637 [2024-07-11 13:46:45.916980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:43.637 [2024-07-11 13:46:45.917000] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0cc4074000 00:16:43.637 [2024-07-11 13:46:45.917979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.918979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.919993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.921006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.922013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.923022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.924028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.925038] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:43.637 [2024-07-11 13:46:45.926047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:43.637 [2024-07-11 13:46:45.926057] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0cc2e3a000 00:16:43.637 [2024-07-11 13:46:45.927001] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:43.637 [2024-07-11 13:46:45.934524] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:43.637 [2024-07-11 13:46:45.934541] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:43.637 [2024-07-11 13:46:45.939632] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:43.637 [2024-07-11 13:46:45.939667] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:43.637 [2024-07-11 13:46:45.939731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:16:43.637 [2024-07-11 13:46:45.939745] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:43.637 [2024-07-11 13:46:45.939749] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:43.637 [2024-07-11 13:46:45.940632] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:43.637 [2024-07-11 13:46:45.940641] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:43.637 [2024-07-11 13:46:45.940647] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:43.637 [2024-07-11 13:46:45.941643] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:43.637 [2024-07-11 13:46:45.941651] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:43.637 [2024-07-11 13:46:45.941657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.942648] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:43.637 [2024-07-11 13:46:45.942656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.943653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:43.637 [2024-07-11 13:46:45.943661] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:43.637 [2024-07-11 13:46:45.943665] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.943671] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.943776] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:43.637 [2024-07-11 13:46:45.943780] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.943785] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:43.637 [2024-07-11 13:46:45.944659] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:43.637 [2024-07-11 13:46:45.945663] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:43.637 [2024-07-11 13:46:45.946676] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:43.637 [2024-07-11 13:46:45.947697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:43.637 [2024-07-11 13:46:45.948685] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:43.637 [2024-07-11 13:46:45.948694] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:43.637 [2024-07-11 13:46:45.948698] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:43.638 [2024-07-11 13:46:45.948715] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:43.638 [2024-07-11 13:46:45.948724] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:43.638 [2024-07-11 13:46:45.948733] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:43.638 [2024-07-11 13:46:45.948737] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:43.638 [2024-07-11 13:46:45.948748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:43.638 [2024-07-11 13:46:45.956167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:43.638 [2024-07-11 13:46:45.956179] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:43.638 [2024-07-11 13:46:45.956183] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:43.638 [2024-07-11 13:46:45.956187] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:43.638 [2024-07-11 13:46:45.956191] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:43.638 [2024-07-11 13:46:45.956195] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:43.638 [2024-07-11 13:46:45.956200] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:43.638 [2024-07-11 13:46:45.956204] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:43.638 [2024-07-11 13:46:45.956212] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:43.638 [2024-07-11 13:46:45.956222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:43.638 [2024-07-11 13:46:45.964165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:43.638 [2024-07-11 
13:46:45.964181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.638 [2024-07-11 13:46:45.964188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.638 [2024-07-11 13:46:45.964196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.638 [2024-07-11 13:46:45.964203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.638 [2024-07-11 13:46:45.964209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.964217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.964225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:45.972164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:45.972172] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms
00:16:43.638 [2024-07-11 13:46:45.972176] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.972182] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.972199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.972207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:45.980165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:45.980219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.980227] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.980233] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:16:43.638 [2024-07-11 13:46:45.980237] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:16:43.638 [2024-07-11 13:46:45.980243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:45.988166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:45.988181] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added
00:16:43.638 [2024-07-11 13:46:45.988188] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.988195] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.988201] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:16:43.638 [2024-07-11 13:46:45.988205] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:16:43.638 [2024-07-11 13:46:45.988211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:45.996166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:45.996185] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.996194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:45.996203] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:16:43.638 [2024-07-11 13:46:45.996207] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:16:43.638 [2024-07-11 13:46:45.996213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.004178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004184] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004192] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004201] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004206] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID
00:16:43.638 [2024-07-11 13:46:46.004209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms)
00:16:43.638 [2024-07-11 13:46:46.004213] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout)
00:16:43.638 [2024-07-11 13:46:46.004227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.012165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.012180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.020165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.020178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.028165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.028177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.036166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.036179] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:16:43.638 [2024-07-11 13:46:46.036184] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:16:43.638 [2024-07-11 13:46:46.036187] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:16:43.638 [2024-07-11 13:46:46.036190] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:16:43.638 [2024-07-11 13:46:46.036195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:16:43.638 [2024-07-11 13:46:46.036201] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:16:43.638 [2024-07-11 13:46:46.036205] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:16:43.638 [2024-07-11 13:46:46.036214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.036220] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:16:43.638 [2024-07-11 13:46:46.036224] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:16:43.638 [2024-07-11 13:46:46.036229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.036235] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:16:43.638 [2024-07-11 13:46:46.036239] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:16:43.638 [2024-07-11 13:46:46.036244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:16:43.638 [2024-07-11 13:46:46.044166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.044183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.044191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:16:43.638 [2024-07-11 13:46:46.044197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:16:43.638 =====================================================
00:16:43.638 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:43.638 =====================================================
00:16:43.638 Controller Capabilities/Features
00:16:43.638 ================================
00:16:43.638 Vendor ID: 4e58
00:16:43.638 Subsystem Vendor ID: 4e58
00:16:43.638 Serial Number: SPDK2
00:16:43.638 Model Number: SPDK bdev Controller
00:16:43.638 Firmware Version: 24.01.1
00:16:43.638 Recommended Arb Burst: 6
00:16:43.638 IEEE OUI Identifier: 8d 6b 50
00:16:43.638 Multi-path I/O
00:16:43.638 May have multiple subsystem ports: Yes
00:16:43.638 May have multiple controllers: Yes
00:16:43.638 Associated with SR-IOV VF: No
00:16:43.638 Max Data Transfer Size: 131072
00:16:43.638 Max Number of Namespaces: 32
00:16:43.638 Max Number of I/O Queues: 127
00:16:43.638 NVMe Specification Version (VS): 1.3
00:16:43.638 NVMe Specification Version (Identify): 1.3
00:16:43.638 Maximum Queue Entries: 256
00:16:43.638 Contiguous Queues Required: Yes
00:16:43.638 Arbitration Mechanisms Supported
00:16:43.638 Weighted Round Robin: Not Supported
00:16:43.638 Vendor Specific: Not Supported
00:16:43.638 Reset Timeout: 15000 ms
00:16:43.638 Doorbell Stride: 4 bytes
00:16:43.638 NVM Subsystem Reset: Not Supported
00:16:43.638 Command Sets Supported
00:16:43.638 NVM Command Set: Supported
00:16:43.638 Boot Partition: Not Supported
00:16:43.638 Memory Page Size Minimum: 4096 bytes
00:16:43.638 Memory Page Size Maximum: 4096 bytes
00:16:43.638 Persistent Memory Region: Not Supported
00:16:43.638 Optional Asynchronous Events Supported
00:16:43.638 Namespace Attribute Notices: Supported
00:16:43.638 Firmware Activation Notices: Not Supported
00:16:43.638 ANA Change Notices: Not Supported
00:16:43.638 PLE Aggregate Log Change Notices: Not Supported
00:16:43.638 LBA Status Info Alert Notices: Not Supported
00:16:43.638 EGE Aggregate Log Change Notices: Not Supported
00:16:43.638 Normal NVM Subsystem Shutdown event: Not Supported
00:16:43.638 Zone Descriptor Change Notices: Not Supported
00:16:43.638 Discovery Log Change Notices: Not Supported
00:16:43.638 Controller Attributes
00:16:43.638 128-bit Host Identifier: Supported
00:16:43.638 Non-Operational Permissive Mode: Not Supported
00:16:43.638 NVM Sets: Not Supported
00:16:43.638 Read Recovery Levels: Not Supported
00:16:43.638 Endurance Groups: Not Supported
00:16:43.638 Predictable Latency Mode: Not Supported
00:16:43.638 Traffic Based Keep ALive: Not Supported
00:16:43.638 Namespace Granularity: Not Supported
00:16:43.638 SQ Associations: Not Supported
00:16:43.638 UUID List: Not Supported
00:16:43.638 Multi-Domain Subsystem: Not Supported
00:16:43.638 Fixed Capacity Management: Not Supported
00:16:43.638 Variable Capacity Management: Not Supported
00:16:43.638 Delete Endurance Group: Not Supported
00:16:43.638 Delete NVM Set: Not Supported
00:16:43.638 Extended LBA Formats Supported: Not Supported
00:16:43.638 Flexible Data Placement Supported: Not Supported
00:16:43.638
00:16:43.638 Controller Memory Buffer Support
00:16:43.638 ================================
00:16:43.638 Supported: No
00:16:43.638
00:16:43.638 Persistent Memory Region Support
00:16:43.638 ================================
00:16:43.638 Supported: No
00:16:43.638
00:16:43.638 Admin Command Set Attributes
00:16:43.638 ============================
00:16:43.638 Security Send/Receive: Not Supported
00:16:43.638 Format NVM: Not Supported
00:16:43.638 Firmware Activate/Download: Not Supported
00:16:43.638 Namespace Management: Not Supported
00:16:43.639 Device Self-Test: Not Supported
00:16:43.639 Directives: Not Supported
00:16:43.639 NVMe-MI: Not Supported
00:16:43.639 Virtualization Management: Not Supported
00:16:43.639 Doorbell Buffer Config: Not Supported
00:16:43.639 Get LBA Status Capability: Not Supported
00:16:43.639 Command & Feature Lockdown Capability: Not Supported
00:16:43.639 Abort Command Limit: 4
00:16:43.639 Async Event Request Limit: 4
00:16:43.639 Number of Firmware Slots: N/A
00:16:43.639 Firmware Slot 1 Read-Only: N/A
00:16:43.639 Firmware Activation Without Reset: N/A
00:16:43.639 Multiple Update Detection Support: N/A
00:16:43.639 Firmware Update Granularity: No Information Provided
00:16:43.639 Per-Namespace SMART Log: No
00:16:43.639 Asymmetric Namespace Access Log Page: Not Supported
00:16:43.639 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:16:43.639 Command Effects Log Page: Supported
00:16:43.639 Get Log Page Extended Data: Supported
00:16:43.639 Telemetry Log Pages: Not Supported
00:16:43.639 Persistent Event Log Pages: Not Supported
00:16:43.639 Supported Log Pages Log Page: May Support
00:16:43.639 Commands Supported & Effects Log Page: Not Supported
00:16:43.639 Feature Identifiers & Effects Log Page:May Support
00:16:43.639 NVMe-MI Commands & Effects Log Page: May Support
00:16:43.639 Data Area 4 for Telemetry Log: Not Supported
00:16:43.639 Error Log Page Entries Supported: 128
00:16:43.639 Keep Alive: Supported
00:16:43.639 Keep Alive Granularity: 10000 ms
00:16:43.639
00:16:43.639 NVM Command Set Attributes
00:16:43.639 ==========================
00:16:43.639 Submission Queue Entry Size
00:16:43.639 Max: 64
00:16:43.639 Min: 64
00:16:43.639 Completion Queue Entry Size
00:16:43.639 Max: 16
00:16:43.639 Min: 16
00:16:43.639 Number of Namespaces: 32
00:16:43.639 Compare Command: Supported
00:16:43.639 Write Uncorrectable Command: Not Supported
00:16:43.639 Dataset Management Command: Supported
00:16:43.639 Write Zeroes Command: Supported
00:16:43.639 Set Features Save Field: Not Supported
00:16:43.639 Reservations: Not Supported
00:16:43.639 Timestamp: Not Supported
00:16:43.639 Copy: Supported
00:16:43.639 Volatile Write Cache: Present
00:16:43.639 Atomic Write Unit (Normal): 1
00:16:43.639 Atomic Write Unit (PFail): 1
00:16:43.639 Atomic Compare & Write Unit: 1
00:16:43.639 Fused Compare & Write: Supported
00:16:43.639 Scatter-Gather List
00:16:43.639 SGL Command Set: Supported (Dword aligned)
00:16:43.639 SGL Keyed: Not Supported
00:16:43.639 SGL Bit Bucket Descriptor: Not Supported
00:16:43.639 SGL Metadata Pointer: Not Supported
00:16:43.639 Oversized SGL: Not Supported
00:16:43.639 SGL Metadata Address: Not Supported
00:16:43.639 SGL Offset: Not Supported
00:16:43.639 Transport SGL Data Block: Not Supported
00:16:43.639 Replay Protected Memory Block: Not Supported
00:16:43.639
00:16:43.639 Firmware Slot Information
00:16:43.639 =========================
00:16:43.639 Active slot: 1
00:16:43.639 Slot 1 Firmware Revision: 24.01.1
00:16:43.639
00:16:43.639
00:16:43.639 Commands Supported and Effects
00:16:43.639 ==============================
00:16:43.639 Admin Commands
00:16:43.639 --------------
00:16:43.639 Get Log Page (02h): Supported
00:16:43.639 Identify (06h): Supported
00:16:43.639 Abort (08h): Supported
00:16:43.639 Set Features (09h): Supported
00:16:43.639 Get Features (0Ah): Supported
00:16:43.639 Asynchronous Event Request (0Ch): Supported
00:16:43.639 Keep Alive (18h): Supported
00:16:43.639 I/O Commands
00:16:43.639 ------------
00:16:43.639 Flush (00h): Supported LBA-Change
00:16:43.639 Write (01h): Supported LBA-Change
00:16:43.639 Read (02h): Supported
00:16:43.639 Compare (05h): Supported
00:16:43.639 Write Zeroes (08h): Supported LBA-Change
00:16:43.639 Dataset Management (09h): Supported LBA-Change
00:16:43.639 Copy (19h): Supported LBA-Change
00:16:43.639 Unknown (79h): Supported LBA-Change
00:16:43.639 Unknown (7Ah): Supported
00:16:43.639
00:16:43.639 Error Log
00:16:43.639 =========
00:16:43.639
00:16:43.639 Arbitration
00:16:43.639 ===========
00:16:43.639 Arbitration Burst: 1
00:16:43.639
00:16:43.639 Power Management
00:16:43.639 ================
00:16:43.639 Number of Power States: 1
00:16:43.639 Current Power State: Power State #0
00:16:43.639 Power State #0:
00:16:43.639 Max Power: 0.00 W
00:16:43.639 Non-Operational State: Operational
00:16:43.639 Entry Latency: Not Reported
00:16:43.639 Exit Latency: Not Reported
00:16:43.639 Relative Read Throughput: 0
00:16:43.639 Relative Read Latency: 0
00:16:43.639 Relative Write Throughput: 0
00:16:43.639 Relative Write Latency: 0
00:16:43.639 Idle Power: Not Reported
00:16:43.639 Active Power: Not Reported
00:16:43.639 Non-Operational Permissive Mode: Not Supported
00:16:43.639
00:16:43.639 Health Information
00:16:43.639 ==================
00:16:43.639 Critical Warnings:
00:16:43.639 Available Spare Space: OK
00:16:43.639 Temperature: OK
00:16:43.639 Device Reliability: OK
00:16:43.639 Read Only: No
00:16:43.639 Volatile Memory Backup: OK
00:16:43.639 Current Temperature: 0 Kelvin[2024-07-11 13:46:46.044286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:16:43.639 [2024-07-11 13:46:46.052167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:16:43.639 [2024-07-11 13:46:46.052196] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:16:43.639 [2024-07-11 13:46:46.052204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.639 [2024-07-11 13:46:46.052209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.639 [2024-07-11 13:46:46.052215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.639 [2024-07-11 13:46:46.052220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.639 [2024-07-11 13:46:46.052272] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:16:43.639 [2024-07-11 13:46:46.052282] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:16:43.639 [2024-07-11 13:46:46.053303] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:16:43.639 [2024-07-11 13:46:46.053309] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:16:43.639 [2024-07-11 13:46:46.054286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:16:43.639 [2024-07-11 13:46:46.054297] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:16:43.639 [2024-07-11 13:46:46.054341] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:16:43.639 [2024-07-11 13:46:46.057166] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:43.639 (-273 Celsius)
00:16:43.639 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:43.639 Available Spare: 0%
00:16:43.639 Available Spare Threshold: 0%
00:16:43.639 Life Percentage Used: 0%
00:16:43.639 Data Units Read: 0
00:16:43.639 Data Units Written: 0
00:16:43.639 Host Read Commands: 0
00:16:43.639 Host Write Commands: 0
00:16:43.639 Controller Busy Time: 0 minutes
00:16:43.639 Power Cycles: 0
00:16:43.639 Power On Hours: 0 hours
00:16:43.639 Unsafe Shutdowns: 0
00:16:43.639 Unrecoverable Media Errors: 0
00:16:43.639 Lifetime Error Log Entries: 0
00:16:43.639 Warning Temperature Time: 0 minutes
00:16:43.639 Critical Temperature Time: 0 minutes
00:16:43.639
00:16:43.639 Number of Queues
00:16:43.639 ================
00:16:43.639 Number of I/O Submission Queues: 127
00:16:43.639 Number of I/O Completion Queues: 127
00:16:43.639
00:16:43.639 Active Namespaces
00:16:43.639 =================
00:16:43.639 Namespace ID:1
00:16:43.639 Error Recovery Timeout: Unlimited
00:16:43.639 Command Set Identifier: NVM (00h)
00:16:43.639 Deallocate: Supported
00:16:43.639 Deallocated/Unwritten Error: Not Supported
00:16:43.639 Deallocated Read Value: Unknown
00:16:43.639 Deallocate in Write Zeroes: Not Supported
00:16:43.639 Deallocated Guard Field: 0xFFFF
00:16:43.639 Flush: Supported
00:16:43.639 Reservation: Supported
00:16:43.639 Namespace Sharing Capabilities: Multiple Controllers
00:16:43.639 Size (in LBAs): 131072 (0GiB)
00:16:43.639 Capacity (in LBAs): 131072 (0GiB)
00:16:43.639 Utilization (in LBAs): 131072 (0GiB)
00:16:43.639 NGUID: F9C4204C85084B328504789AA1B8A85C
00:16:43.639 UUID: f9c4204c-8508-4b32-8504-789aa1b8a85c
00:16:43.639 Thin Provisioning: Not Supported
00:16:43.639 Per-NS Atomic Units: Yes
00:16:43.639 Atomic Boundary Size (Normal): 0
00:16:43.639 Atomic Boundary Size (PFail): 0
00:16:43.639 Atomic Boundary Offset: 0
00:16:43.639 Maximum Single Source Range Length: 65535
00:16:43.639 Maximum Copy Length: 65535
00:16:43.639 Maximum Source Range Count: 1
00:16:43.639 NGUID/EUI64 Never Reused: No
00:16:43.639 Namespace Write Protected: No
00:16:43.639 Number of LBA Formats: 1
00:16:43.639 Current LBA Format: LBA Format #00
00:16:43.639 LBA Format #00: Data Size: 512 Metadata Size: 0
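For reference, a dump in the same format can be regenerated by hand against this vfio-user endpoint with SPDK's identify example; a minimal sketch, assuming the example binary was built in this workspace (the -r transport string is copied from the spdk_nvme_perf invocation below):

# Hypothetical manual re-run of the controller/namespace report above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'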
00:16:43.639
00:16:43.897 13:46:46 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:16:43.897 EAL: No free 2048 kB hugepages reported on node 1
00:16:49.162 Initializing NVMe Controllers
00:16:49.162 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:49.162 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:16:49.162 Initialization complete. Launching workers.
00:16:49.162 ========================================================
00:16:49.162 Latency(us)
00:16:49.162 Device Information : IOPS MiB/s Average min max
00:16:49.162 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39867.34 155.73 3210.25 979.18 6813.95
00:16:49.162 ========================================================
00:16:49.162 Total : 39867.34 155.73 3210.25 979.18 6813.95
00:16:49.162
00:16:49.162 13:46:51 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:16:49.162 EAL: No free 2048 kB hugepages reported on node 1
00:16:54.432 Initializing NVMe Controllers
00:16:54.432 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:54.432 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:16:54.432 Initialization complete. Launching workers.
00:16:54.432 ========================================================
00:16:54.432 Latency(us)
00:16:54.432 Device Information : IOPS MiB/s Average min max
00:16:54.432 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39911.06 155.90 3206.72 949.43 6693.65
00:16:54.432 ========================================================
00:16:54.432 Total : 39911.06 155.90 3206.72 949.43 6693.65
00:16:54.432
00:16:54.432 13:46:56 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:16:54.432 EAL: No free 2048 kB hugepages reported on node 1
00:16:59.703 Initializing NVMe Controllers
00:16:59.703 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:59.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:59.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:16:59.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:16:59.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:16:59.704 Initialization complete. Launching workers.
00:16:59.704 Starting thread on core 2
00:16:59.704 Starting thread on core 3
00:16:59.704 Starting thread on core 1
00:16:59.704 13:47:01 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:16:59.704 EAL: No free 2048 kB hugepages reported on node 1
00:17:02.991 Initializing NVMe Controllers
00:17:02.991 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:02.991 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:02.991 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:17:02.991 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:17:02.991 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:17:02.991 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:17:02.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:17:02.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:17:02.991 Initialization complete. Launching workers.
00:17:02.991 Starting thread on core 1 with urgent priority queue
00:17:02.991 Starting thread on core 2 with urgent priority queue
00:17:02.991 Starting thread on core 3 with urgent priority queue
00:17:02.991 Starting thread on core 0 with urgent priority queue
00:17:02.991 SPDK bdev Controller (SPDK2 ) core 0: 9395.33 IO/s 10.64 secs/100000 ios
00:17:02.991 SPDK bdev Controller (SPDK2 ) core 1: 9072.67 IO/s 11.02 secs/100000 ios
00:17:02.991 SPDK bdev Controller (SPDK2 ) core 2: 7645.00 IO/s 13.08 secs/100000 ios
00:17:02.991 SPDK bdev Controller (SPDK2 ) core 3: 9079.33 IO/s 11.01 secs/100000 ios
00:17:02.991 ========================================================
00:17:02.991
00:17:02.991 13:47:05 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:17:02.991 EAL: No free 2048 kB hugepages reported on node 1
00:17:03.280 Initializing NVMe Controllers
00:17:03.280 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:03.280 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:03.280 Namespace ID: 1 size: 0GB
00:17:03.280 Initialization complete.
00:17:03.280 INFO: using host memory buffer for IO
00:17:03.280 Hello world!
00:17:03.280 13:47:05 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:17:03.280 EAL: No free 2048 kB hugepages reported on node 1
00:17:04.663 Initializing NVMe Controllers
00:17:04.663 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:04.663 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:04.663 Initialization complete. Launching workers.
00:17:04.663 submit (in ns) avg, min, max = 6517.2, 3200.9, 3998181.7
00:17:04.663 complete (in ns) avg, min, max = 21459.1, 1794.8, 6989947.0
00:17:04.663
00:17:04.663 Submit histogram
00:17:04.663 ================
00:17:04.663 Range in us Cumulative Count
00:17:04.663 3.200 - 3.214: 0.0299% ( 5)
00:17:04.663 3.214 - 3.228: 0.2875% ( 43)
00:17:04.663 3.228 - 3.242: 1.9105% ( 271)
00:17:04.663 3.242 - 3.256: 4.7434% ( 473)
00:17:04.663 3.256 - 3.270: 8.5644% ( 638)
00:17:04.663 3.270 - 3.283: 13.4755% ( 820)
00:17:04.663 3.283 - 3.297: 19.3687% ( 984)
00:17:04.663 3.297 - 3.311: 25.1482% ( 965)
00:17:04.663 3.311 - 3.325: 30.8858% ( 958)
00:17:04.663 3.325 - 3.339: 36.4077% ( 922)
00:17:04.663 3.339 - 3.353: 41.2649% ( 811)
00:17:04.663 3.353 - 3.367: 45.4453% ( 698)
00:17:04.663 3.367 - 3.381: 49.7215% ( 714)
00:17:04.663 3.381 - 3.395: 55.8004% ( 1015)
00:17:04.663 3.395 - 3.409: 60.5378% ( 791)
00:17:04.663 3.409 - 3.423: 65.2512% ( 787)
00:17:04.663 3.423 - 3.437: 71.1026% ( 977)
00:17:04.663 3.437 - 3.450: 76.0137% ( 820)
00:17:04.663 3.450 - 3.464: 79.4634% ( 576)
00:17:04.663 3.464 - 3.478: 82.4459% ( 498)
00:17:04.663 3.478 - 3.492: 84.5601% ( 353)
00:17:04.663 3.492 - 3.506: 85.9076% ( 225)
00:17:04.663 3.506 - 3.520: 86.8539% ( 158)
00:17:04.663 3.520 - 3.534: 87.4169% ( 94)
00:17:04.663 3.534 - 3.548: 88.0098% ( 99)
00:17:04.663 3.548 - 3.562: 88.6806% ( 112)
00:17:04.663 3.562 - 3.590: 90.3935% ( 286)
00:17:04.663 3.590 - 3.617: 92.1064% ( 286)
00:17:04.663 3.617 - 3.645: 93.7953% ( 282)
00:17:04.663 3.645 - 3.673: 95.3045% ( 252)
00:17:04.663 3.673 - 3.701: 96.8198% ( 253)
00:17:04.663 3.701 - 3.729: 97.8978% ( 180)
00:17:04.663 3.729 - 3.757: 98.7243% ( 138)
00:17:04.663 3.757 - 3.784: 99.1436% ( 70)
00:17:04.663 3.784 - 3.812: 99.4370% ( 49)
00:17:04.663 3.812 - 3.840: 99.5568% ( 20)
00:17:04.663 3.840 - 3.868: 99.5868% ( 5)
00:17:04.663 3.868 - 3.896: 99.6047% ( 3)
00:17:04.663 3.896 - 3.923: 99.6227% ( 3)
00:17:04.663 3.923 - 3.951: 99.6287% ( 1)
00:17:04.663 5.203 - 5.231: 99.6347% ( 1)
00:17:04.663 5.343 - 5.370: 99.6466% ( 2)
00:17:04.663 5.370 - 5.398: 99.6526% ( 1)
00:17:04.663 5.398 - 5.426: 99.6586% ( 1)
00:17:04.663 5.426 - 5.454: 99.6646% ( 1)
00:17:04.663 5.510 - 5.537: 99.6766% ( 2)
00:17:04.663 5.537 - 5.565: 99.6886% ( 2)
00:17:04.663 5.621 - 5.649: 99.7005% ( 2)
00:17:04.663 5.649 - 5.677: 99.7065% ( 1)
00:17:04.663 5.677 - 5.704: 99.7125% ( 1)
00:17:04.663 5.704 - 5.732: 99.7245% ( 2)
00:17:04.663 5.732 - 5.760: 99.7365% ( 2)
00:17:04.663 5.816 - 5.843: 99.7425% ( 1)
00:17:04.663 5.871 - 5.899: 99.7544% ( 2)
00:17:04.663 5.899 - 5.927: 99.7664% ( 2)
00:17:04.663 5.927 - 5.955: 99.7724% ( 1)
00:17:04.663 5.955 - 5.983: 99.7844% ( 2)
00:17:04.663 5.983 - 6.010: 99.7904% ( 1)
00:17:04.663 6.038 - 6.066: 99.7964% ( 1)
00:17:04.663 6.094 - 6.122: 99.8024% ( 1)
00:17:04.663 6.122 - 6.150: 99.8083% ( 1)
00:17:04.663 6.150 - 6.177: 99.8203% ( 2)
00:17:04.663 6.261 - 6.289: 99.8263% ( 1)
00:17:04.663 6.317 - 6.344: 99.8383% ( 2)
00:17:04.663 6.344 - 6.372: 99.8443% ( 1)
00:17:04.663 6.400 - 6.428: 99.8503% ( 1)
00:17:04.663 6.539 - 6.567: 99.8563% ( 1)
00:17:04.663 6.567 - 6.595: 99.8623% ( 1)
00:17:04.663 6.734 - 6.762: 99.8682% ( 1)
00:17:04.663 7.235 - 7.290: 99.8742% ( 1)
00:17:04.663 7.402 - 7.457: 99.8802% ( 1)
00:17:04.663 7.457 - 7.513: 99.8862% ( 1)
00:17:04.663 7.791 - 7.847: 99.8922% ( 1)
00:17:04.663 8.348 - 8.403: 99.8982% ( 1)
00:17:04.663 9.962 - 10.017: 99.9042% ( 1)
00:17:04.663 10.852 - 10.908: 99.9102% ( 1)
00:17:04.663 14.470 - 14.581: 99.9162% ( 1)
00:17:04.663 15.026 - 15.137: 99.9221% ( 1)
00:17:04.663 3989.148 - 4017.642: 100.0000% ( 13)
00:17:04.663
00:17:04.663 Complete histogram
00:17:04.663 ==================
00:17:04.663 Range in us Cumulative Count
00:17:04.663 1.795 - 1.809: 2.5753% ( 430)
00:17:04.663 1.809 - 1.823: 20.4947% ( 2992)
00:17:04.663 1.823 - 1.837: 38.8752% ( 3069)
00:17:04.663 1.837 - 1.850: 46.6192% ( 1293)
00:17:04.663 1.850 - 1.864: 50.9912% ( 730)
00:17:04.663 1.864 - 1.878: 66.4011% ( 2573)
00:17:04.663 1.878 - 1.892: 87.1953% ( 3472)
00:17:04.663 1.892 - 1.906: 95.8316% ( 1442)
00:17:04.663 1.906 - 1.920: 98.4488% ( 437)
00:17:04.663 1.920 - 1.934: 99.0777% ( 105)
00:17:04.663 1.934 - 1.948: 99.1735% ( 16)
00:17:04.663 1.948 - 1.962: 99.1975% ( 4)
00:17:04.663 1.962 - 1.976: 99.2154% ( 3)
00:17:04.663 1.976 - 1.990: 99.2334% ( 3)
00:17:04.663 1.990 - 2.003: 99.2514% ( 3)
00:17:04.663 2.003 - 2.017: 99.2753% ( 4)
00:17:04.663 2.017 - 2.031: 99.2873% ( 2)
00:17:04.663 2.031 - 2.045: 99.2993% ( 2)
00:17:04.663 2.045 - 2.059: 99.3113% ( 2)
00:17:04.663 2.059 - 2.073: 99.3232% ( 2)
00:17:04.663 2.073 - 2.087: 99.3292% ( 1)
00:17:04.663 2.087 - 2.101: 99.3352% ( 1)
00:17:04.663 2.393 - 2.407: 99.3412% ( 1)
00:17:04.663 3.520 - 3.534: 99.3472% ( 1)
00:17:04.663 3.645 - 3.673: 99.3532% ( 1)
00:17:04.663 3.784 - 3.812: 99.3592% ( 1)
00:17:04.663 3.840 - 3.868: 99.3652% ( 1)
00:17:04.663 4.007 - 4.035: 99.3771% ( 2)
00:17:04.663 4.063 - 4.090: 99.3831% ( 1)
00:17:04.663 4.118 - 4.146: 99.3951% ( 2)
00:17:04.663 4.146 - 4.174: 99.4131% ( 3)
00:17:04.663 4.202 - 4.230: 99.4310% ( 3)
00:17:04.663 4.257 - 4.285: 99.4430% ( 2)
00:17:04.663 4.341 - 4.369: 99.4490% ( 1)
00:17:04.663 4.424 - 4.452: 99.4550% ( 1)
00:17:04.663 4.563 - 4.591: 99.4610% ( 1)
00:17:04.663 4.591 - 4.619: 99.4730% ( 2)
00:17:04.663 4.675 - 4.703: 99.4789% ( 1)
00:17:04.663 4.953 - 4.981: 99.4849% ( 1)
00:17:04.663 5.370 - 5.398: 99.4969% ( 2)
00:17:04.663 10.574 - 10.630: 99.5029% ( 1)
00:17:04.663 644.675 - 648.237: 99.5089% ( 1)
00:17:04.663 1182.497 - 1189.621: 99.5149% ( 1)
00:17:04.663 2023.068 - 2037.315: 99.5209% ( 1)
00:17:04.663 2877.885 - 2892.132: 99.5269% ( 1)
00:17:04.663 3989.148 - 4017.642: 99.9880% ( 77)
00:17:04.663 5983.722 - 6012.216: 99.9940% ( 1)
00:17:04.663 6981.009 - 7009.503: 100.0000% ( 1)
00:17:04.663
00:17:04.663 13:47:06 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:17:04.663 13:47:06 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:17:04.663 13:47:06 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:17:04.663 13:47:06 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:17:04.663 13:47:06 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems [
00:17:04.663 {
00:17:04.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:04.663 "subtype": "Discovery",
00:17:04.663 "listen_addresses": [],
00:17:04.663 "allow_any_host": true,
00:17:04.663 "hosts": []
00:17:04.663 },
00:17:04.663 {
00:17:04.663 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:04.663 "subtype": "NVMe",
00:17:04.663 "listen_addresses": [
00:17:04.663 {
00:17:04.663 "transport": "VFIOUSER",
00:17:04.663 "trtype": "VFIOUSER",
00:17:04.663 "adrfam": "IPv4",
00:17:04.663 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:04.663 "trsvcid": "0"
00:17:04.663 }
00:17:04.663 ],
00:17:04.663 "allow_any_host": true,
00:17:04.664 "hosts": [],
00:17:04.664 "serial_number": "SPDK1",
00:17:04.664 "model_number": "SPDK bdev Controller",
00:17:04.664 "max_namespaces": 32,
00:17:04.664 "min_cntlid": 1,
00:17:04.664 "max_cntlid": 65519,
00:17:04.664 "namespaces": [
00:17:04.664 {
00:17:04.664 "nsid": 1,
00:17:04.664 "bdev_name": "Malloc1",
00:17:04.664 "name": "Malloc1",
00:17:04.664 "nguid": "B6CDE80EE96E40CC93536A0C3B69641E",
00:17:04.664 "uuid": "b6cde80e-e96e-40cc-9353-6a0c3b69641e"
00:17:04.664 },
00:17:04.664 {
00:17:04.664 "nsid": 2,
00:17:04.664 "bdev_name": "Malloc3",
00:17:04.664 "name": "Malloc3",
00:17:04.664 "nguid": "76DE07D3D05E4027BEA7BDD691FD8B44",
00:17:04.664 "uuid": "76de07d3-d05e-4027-bea7-bdd691fd8b44"
00:17:04.664 }
00:17:04.664 ]
00:17:04.664 },
00:17:04.664 {
00:17:04.664 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:04.664 "subtype": "NVMe",
00:17:04.664 "listen_addresses": [
00:17:04.664 {
00:17:04.664 "transport": "VFIOUSER",
00:17:04.664 "trtype": "VFIOUSER",
00:17:04.664 "adrfam": "IPv4",
00:17:04.664 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:04.664 "trsvcid": "0"
00:17:04.664 }
00:17:04.664 ],
00:17:04.664 "allow_any_host": true,
00:17:04.664 "hosts": [],
00:17:04.664 "serial_number": "SPDK2",
00:17:04.664 "model_number": "SPDK bdev Controller",
00:17:04.664 "max_namespaces": 32,
00:17:04.664 "min_cntlid": 1,
00:17:04.664 "max_cntlid": 65519,
00:17:04.664 "namespaces": [
00:17:04.664 {
00:17:04.664 "nsid": 1,
00:17:04.664 "bdev_name": "Malloc2",
00:17:04.664 "name": "Malloc2",
00:17:04.664 "nguid": "F9C4204C85084B328504789AA1B8A85C",
00:17:04.664 "uuid": "f9c4204c-8508-4b32-8504-789aa1b8a85c"
00:17:04.664 }
00:17:04.664 ]
00:17:04.664 }
00:17:04.664 ]
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1563755
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:17:04.929 13:47:07 -- common/autotest_common.sh@1244 -- # local i=0
00:17:04.929 13:47:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:04.929 13:47:07 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:17:04.929 13:47:07 -- common/autotest_common.sh@1255 -- # return 0
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:17:04.929 13:47:07 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:17:04.929 EAL: No free 2048 kB hugepages reported on node 1
00:17:05.183 Malloc4
00:17:05.183 13:47:07 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:17:05.183 13:47:07 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:17:05.183 Asynchronous Event Request test
00:17:05.183 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:17:05.183 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:17:05.183 Registering asynchronous event callbacks...
00:17:05.183 Starting namespace attribute notice tests for all controllers...
00:17:05.183 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:17:05.183 aer_cb - Changed Namespace
00:17:05.183 Cleaning up... [
00:17:05.442 {
00:17:05.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:17:05.442 "subtype": "Discovery",
00:17:05.442 "listen_addresses": [],
00:17:05.442 "allow_any_host": true,
00:17:05.442 "hosts": []
00:17:05.442 },
00:17:05.442 {
00:17:05.442 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:17:05.442 "subtype": "NVMe",
00:17:05.442 "listen_addresses": [
00:17:05.442 {
00:17:05.442 "transport": "VFIOUSER",
00:17:05.442 "trtype": "VFIOUSER",
00:17:05.442 "adrfam": "IPv4",
00:17:05.442 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:17:05.442 "trsvcid": "0"
00:17:05.442 }
00:17:05.442 ],
00:17:05.442 "allow_any_host": true,
00:17:05.442 "hosts": [],
00:17:05.442 "serial_number": "SPDK1",
00:17:05.442 "model_number": "SPDK bdev Controller",
00:17:05.442 "max_namespaces": 32,
00:17:05.442 "min_cntlid": 1,
00:17:05.442 "max_cntlid": 65519,
00:17:05.442 "namespaces": [
00:17:05.443 {
00:17:05.443 "nsid": 1,
00:17:05.443 "bdev_name": "Malloc1",
00:17:05.443 "name": "Malloc1",
00:17:05.443 "nguid": "B6CDE80EE96E40CC93536A0C3B69641E",
00:17:05.443 "uuid": "b6cde80e-e96e-40cc-9353-6a0c3b69641e"
00:17:05.443 },
00:17:05.443 {
00:17:05.443 "nsid": 2,
00:17:05.443 "bdev_name": "Malloc3",
00:17:05.443 "name": "Malloc3",
00:17:05.443 "nguid": "76DE07D3D05E4027BEA7BDD691FD8B44",
00:17:05.443 "uuid": "76de07d3-d05e-4027-bea7-bdd691fd8b44"
00:17:05.443 }
00:17:05.443 ]
00:17:05.443 },
00:17:05.443 {
00:17:05.443 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:17:05.443 "subtype": "NVMe",
00:17:05.443 "listen_addresses": [
00:17:05.443 {
00:17:05.443 "transport": "VFIOUSER",
00:17:05.443 "trtype": "VFIOUSER",
00:17:05.443 "adrfam": "IPv4",
00:17:05.443 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:17:05.443 "trsvcid": "0"
00:17:05.443 }
00:17:05.443 ],
00:17:05.443 "allow_any_host": true,
00:17:05.443 "hosts": [],
00:17:05.443 "serial_number": "SPDK2",
00:17:05.443 "model_number": "SPDK bdev Controller",
00:17:05.443 "max_namespaces": 32,
00:17:05.443 "min_cntlid": 1,
00:17:05.443 "max_cntlid": 65519,
00:17:05.443 "namespaces": [
00:17:05.443 {
00:17:05.443 "nsid": 1,
00:17:05.443 "bdev_name": "Malloc2",
00:17:05.443 "name": "Malloc2",
00:17:05.443 "nguid": "F9C4204C85084B328504789AA1B8A85C",
00:17:05.443 "uuid": "f9c4204c-8508-4b32-8504-789aa1b8a85c"
00:17:05.443 },
00:17:05.443 {
00:17:05.443 "nsid": 2,
00:17:05.443 "bdev_name": "Malloc4",
00:17:05.443 "name": "Malloc4",
00:17:05.443 "nguid": "1439EB6E9C0B450BB44A1B7DDA6F87C4",
00:17:05.443 "uuid": "1439eb6e-9c0b-450b-b44a-1b7dda6f87c4"
00:17:05.443 }
00:17:05.443 ]
00:17:05.443 }
00:17:05.443 ]
00:17:05.443 13:47:07 -- target/nvmf_vfio_user.sh@44 -- # wait 1563755
00:17:05.443 13:47:07 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:17:05.443 13:47:07 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1555883
00:17:05.443 13:47:07 -- common/autotest_common.sh@926 -- # '[' -z 1555883 ']'
00:17:05.443 13:47:07 -- common/autotest_common.sh@930 -- # kill -0 1555883
00:17:05.443 13:47:07 -- common/autotest_common.sh@931 -- # uname
00:17:05.443 13:47:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:05.443 13:47:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1555883
00:17:05.443 13:47:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:05.443 13:47:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:05.443 13:47:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1555883'
00:17:05.443 killing process with pid 1555883
00:17:05.443 13:47:07 -- common/autotest_common.sh@945 -- # kill 1555883
00:17:05.443 [2024-07-11 13:47:07.731662] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:17:05.443 13:47:07 -- common/autotest_common.sh@950 -- # wait 1555883
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1563873
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1563873'
00:17:05.702 Process pid: 1563873
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:17:05.702 13:47:07 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1563873
00:17:05.702 13:47:07 -- common/autotest_common.sh@819 -- # '[' -z 1563873 ']'
00:17:05.702 13:47:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:05.702 13:47:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:05.702 13:47:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:05.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:05.702 13:47:07 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:05.702 13:47:07 -- common/autotest_common.sh@10 -- # set +x
00:17:05.702 [2024-07-11 13:47:08.030642] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:17:05.702 [2024-07-11 13:47:08.031537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:05.702 [2024-07-11 13:47:08.031574] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:05.702 EAL: No free 2048 kB hugepages reported on node 1
00:17:05.702 [2024-07-11 13:47:08.085528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:05.702 [2024-07-11 13:47:08.120370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:05.702 [2024-07-11 13:47:08.120506] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:05.702 [2024-07-11 13:47:08.120514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:05.702 [2024-07-11 13:47:08.120521] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
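The target is then restarted in interrupt mode and the two vfio-user devices are recreated. Condensed from the xtrace that follows, the per-device flow is roughly the sketch below, where rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and $i (1 or 2) is an illustrative placeholder for the loop variable:

# Sketch of the setup sequence traced below, once per device $i.
rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
rpc.py bdev_malloc_create 64 512 -b Malloc$i
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0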
00:17:05.702 [2024-07-11 13:47:08.120622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:05.702 [2024-07-11 13:47:08.120720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:05.702 [2024-07-11 13:47:08.120782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:05.702 [2024-07-11 13:47:08.120783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.961 [2024-07-11 13:47:08.190375] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode.
00:17:05.961 [2024-07-11 13:47:08.190515] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode.
00:17:05.961 [2024-07-11 13:47:08.190676] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode.
00:17:05.961 [2024-07-11 13:47:08.191190] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:17:05.961 [2024-07-11 13:47:08.191279] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode.
00:17:06.530 13:47:08 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:06.530 13:47:08 -- common/autotest_common.sh@852 -- # return 0
00:17:06.530 13:47:08 -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:17:07.470 13:47:09 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:17:07.729 13:47:10 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:17:07.729 13:47:10 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:17:07.729 13:47:10 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:07.729 13:47:10 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:17:07.729 13:47:10 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:17:07.987 Malloc1
00:17:07.987 13:47:10 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:17:07.987 13:47:10 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:17:08.246 13:47:10 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:17:08.506 13:47:10 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:17:08.506 13:47:10 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:17:08.506 13:47:10 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:17:08.506 Malloc2
00:17:08.506 13:47:10 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:17:08.764 13:47:11 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:17:09.022 13:47:11 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:17:09.280 13:47:11 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:17:09.280 13:47:11 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1563873
00:17:09.280 13:47:11 -- common/autotest_common.sh@926 -- # '[' -z 1563873 ']'
00:17:09.280 13:47:11 -- common/autotest_common.sh@930 -- # kill -0 1563873
00:17:09.280 13:47:11 -- common/autotest_common.sh@931 -- # uname
00:17:09.280 13:47:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:09.280 13:47:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1563873
00:17:09.280 13:47:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:09.280 13:47:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:09.280 13:47:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1563873'
00:17:09.280 killing process with pid 1563873
00:17:09.280 13:47:11 -- common/autotest_common.sh@945 -- # kill 1563873
00:17:09.280 13:47:11 -- common/autotest_common.sh@950 -- # wait 1563873
00:17:09.538 13:47:11 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:17:09.538 13:47:11 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:17:09.538
00:17:09.538 real 0m51.033s
00:17:09.538 user 3m22.390s
00:17:09.538 sys 0m3.468s
00:17:09.538 13:47:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:09.538 13:47:11 -- common/autotest_common.sh@10 -- # set +x
00:17:09.538 ************************************
00:17:09.538 END TEST nvmf_vfio_user
00:17:09.538 ************************************
00:17:09.538 13:47:11 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:17:09.538 13:47:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:17:09.538 13:47:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:09.538 13:47:11 -- common/autotest_common.sh@10 -- # set +x
00:17:09.538 ************************************
00:17:09.538 START TEST nvmf_vfio_user_nvme_compliance
00:17:09.538 ************************************
00:17:09.538 13:47:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:17:09.539 * Looking for test storage...
00:17:09.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:17:09.539 13:47:11 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:09.539 13:47:11 -- nvmf/common.sh@7 -- # uname -s
00:17:09.539 13:47:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:09.539 13:47:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:09.539 13:47:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:09.539 13:47:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:09.539 13:47:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:09.539 13:47:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:09.539 13:47:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:09.539 13:47:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:09.539 13:47:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:09.539 13:47:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:09.539 13:47:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:09.539 13:47:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:17:09.539 13:47:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:09.539 13:47:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:09.539 13:47:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:09.539 13:47:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:09.539 13:47:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:09.539 13:47:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:09.539 13:47:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:09.539 13:47:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:09.539 13:47:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:09.539 13:47:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:09.539 13:47:11 -- paths/export.sh@5 -- # export PATH
00:17:09.539 13:47:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:09.539 13:47:11 -- nvmf/common.sh@46 -- # : 0
00:17:09.539 13:47:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:17:09.539 13:47:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:17:09.539 13:47:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:17:09.539 13:47:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:09.539 13:47:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:09.539 13:47:11 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:17:09.539 13:47:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:17:09.539 13:47:11 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:17:09.539 13:47:11 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:09.539 13:47:11 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:09.539 13:47:11 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:17:09.539 13:47:11 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:17:09.539 13:47:11 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:17:09.539 13:47:11 -- compliance/compliance.sh@20 -- # nvmfpid=1564637
00:17:09.539 13:47:11 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1564637'
00:17:09.539 Process pid: 1564637
00:17:09.539 13:47:11 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:17:09.539 13:47:11 -- compliance/compliance.sh@24 -- # waitforlisten 1564637
00:17:09.539 13:47:11 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:17:09.539 13:47:11 -- common/autotest_common.sh@819 -- # '[' -z 1564637 ']'
00:17:09.539 13:47:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:09.539 13:47:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:09.539 13:47:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:09.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:09.539 13:47:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:09.539 13:47:11 -- common/autotest_common.sh@10 -- # set +x
00:17:09.539 [2024-07-11 13:47:11.933386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:09.539 [2024-07-11 13:47:11.933433] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:09.539 EAL: No free 2048 kB hugepages reported on node 1
00:17:09.539 [2024-07-11 13:47:11.987976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:09.797 [2024-07-11 13:47:12.027192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:17:09.797 [2024-07-11 13:47:12.027319] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:09.797 [2024-07-11 13:47:12.027327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:09.797 [2024-07-11 13:47:12.027334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:09.797 [2024-07-11 13:47:12.027382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:09.797 [2024-07-11 13:47:12.027481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:09.797 [2024-07-11 13:47:12.027483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:10.363 13:47:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:10.363 13:47:12 -- common/autotest_common.sh@852 -- # return 0
00:17:10.363 13:47:12 -- compliance/compliance.sh@26 -- # sleep 1
00:17:11.305 13:47:13 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:17:11.305 13:47:13 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:17:11.305 13:47:13 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:17:11.305 13:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:11.305 13:47:13 -- common/autotest_common.sh@10 -- # set +x
00:17:11.305 13:47:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:11.305 13:47:13 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:17:11.305 13:47:13 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:17:11.305 13:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:11.305 13:47:13 -- common/autotest_common.sh@10 -- # set +x
00:17:11.567 malloc0
00:17:11.567 13:47:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:11.567 13:47:13 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:17:11.567 13:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:11.567 13:47:13 -- common/autotest_common.sh@10 -- # set +x
00:17:11.567 13:47:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:11.567 13:47:13 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:17:11.567 13:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:11.567 13:47:13 -- common/autotest_common.sh@10 -- # set +x
00:17:11.567 13:47:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:11.567 13:47:13 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:17:11.567 13:47:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:17:11.567 13:47:13 -- common/autotest_common.sh@10 -- # set +x
00:17:11.567 13:47:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:17:11.567 13:47:13 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:17:11.567 EAL: No free 2048 kB hugepages reported on node 1
00:17:11.567
00:17:11.567
00:17:11.567 CUnit - A unit testing framework for C - Version 2.1-3
00:17:11.567 http://cunit.sourceforge.net/
00:17:11.567
00:17:11.567
00:17:11.567 Suite: nvme_compliance
00:17:11.567 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 13:47:13.948449] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:17:11.567 [2024-07-11 13:47:13.948478] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:17:11.567 [2024-07-11 13:47:13.948485] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:17:11.567 passed
00:17:11.825 Test: admin_identify_ctrlr_verify_fused ...passed
00:17:11.825 Test: admin_identify_ns ...[2024-07-11 13:47:14.173179] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:17:11.825 [2024-07-11 13:47:14.181174] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:17:11.825 passed
00:17:12.083 Test: admin_get_features_mandatory_features ...passed
00:17:12.083 Test: admin_get_features_optional_features ...passed
00:17:12.342 Test: admin_set_features_number_of_queues ...passed
00:17:12.342 Test: admin_get_log_page_mandatory_logs ...passed
00:17:12.342 Test: admin_get_log_page_with_lpo ...[2024-07-11 13:47:14.779171] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:17:12.601 passed
00:17:12.601 Test: fabric_property_get ...passed
00:17:12.601 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 13:47:14.950762] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:17:12.601 passed
00:17:12.859 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 13:47:15.119168] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:17:12.859 [2024-07-11 13:47:15.135166] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:17:12.859 passed
00:17:12.859 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 13:47:15.215695] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:17:12.859 passed
00:17:13.118 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 13:47:15.372174] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:17:13.118 [2024-07-11 13:47:15.396168] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:17:13.118 passed
00:17:13.118 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 13:47:15.483384] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:17:13.118 [2024-07-11 13:47:15.483414] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:17:13.118 passed
00:17:13.377 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 13:47:15.660168] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:17:13.377 [2024-07-11 13:47:15.668176] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:17:13.377 [2024-07-11 13:47:15.676169] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:17:13.377 [2024-07-11 13:47:15.684172] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:17:13.377 passed
00:17:13.377 Test: admin_create_io_sq_verify_pc ...[2024-07-11 13:47:15.810173] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:17:13.636 passed
00:17:14.573 Test: admin_create_io_qp_max_qps ...[2024-07-11 13:47:17.002172] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:17:15.141 passed
00:17:15.141 Test: admin_create_io_sq_shared_cq ...[2024-07-11 13:47:17.593166] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:17:15.399 passed
00:17:15.399
00:17:15.399 Run Summary: Type Total Ran Passed Failed Inactive
00:17:15.399 suites 1 1 n/a 0 0
00:17:15.399 tests 18 18 18 0 0
00:17:15.399 asserts 360 360 360 0 n/a
00:17:15.399
00:17:15.400 Elapsed time = 1.514 seconds
00:17:15.400 13:47:17 -- compliance/compliance.sh@42 -- # killprocess 1564637
00:17:15.400 13:47:17 -- common/autotest_common.sh@926 -- # '[' -z 1564637 ']'
00:17:15.400 13:47:17 -- common/autotest_common.sh@930 -- # kill -0 1564637
00:17:15.400 13:47:17 -- common/autotest_common.sh@931 -- # uname
00:17:15.400 13:47:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:15.400 13:47:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1564637
00:17:15.400 13:47:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:15.400 13:47:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:15.400 13:47:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1564637'
00:17:15.400 killing process with pid 1564637
00:17:15.400 13:47:17 -- common/autotest_common.sh@945 -- # kill 1564637
00:17:15.400 13:47:17 -- common/autotest_common.sh@950 -- # wait 1564637
00:17:15.659 13:47:17 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:17:15.659 13:47:17 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:17:15.659
00:17:15.659 real 0m6.133s
00:17:15.659 user 0m17.665s
00:17:15.659 sys 0m0.433s
00:17:15.659 13:47:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:15.659 13:47:17 -- common/autotest_common.sh@10 -- # set +x
00:17:15.659 ************************************
00:17:15.659 END TEST nvmf_vfio_user_nvme_compliance
00:17:15.659 ************************************
00:17:15.659 13:47:17 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:17:15.659 13:47:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:17:15.659 13:47:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:15.659 13:47:17 -- common/autotest_common.sh@10 -- # set +x
00:17:15.659 ************************************
00:17:15.659 START TEST nvmf_vfio_user_fuzz
00:17:15.659 ************************************
00:17:15.659 13:47:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:17:15.659 * Looking for test storage...
00:17:15.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.659 13:47:18 -- nvmf/common.sh@7 -- # uname -s 00:17:15.659 13:47:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.659 13:47:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.659 13:47:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.659 13:47:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.659 13:47:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.659 13:47:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.659 13:47:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.659 13:47:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.659 13:47:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.659 13:47:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.659 13:47:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.659 13:47:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.659 13:47:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.659 13:47:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.659 13:47:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.659 13:47:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.659 13:47:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.659 13:47:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.659 13:47:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.659 13:47:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.659 13:47:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.659 13:47:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.659 13:47:18 -- paths/export.sh@5 -- # export PATH 00:17:15.659 13:47:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.659 13:47:18 -- nvmf/common.sh@46 -- # : 0 00:17:15.659 13:47:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:15.659 13:47:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:15.659 13:47:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:15.659 13:47:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.659 13:47:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.659 13:47:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:15.659 13:47:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:15.659 13:47:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1565638 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1565638' 00:17:15.659 Process pid: 1565638 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:15.659 13:47:18 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1565638 00:17:15.659 13:47:18 -- common/autotest_common.sh@819 -- # '[' -z 1565638 ']' 00:17:15.659 13:47:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.659 13:47:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.659 13:47:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
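Before the fuzz suite's own trace of it below, here is the target bring-up that both vfio-user suites in this run (the compliance test above and the fuzz test that follows) drive through rpc_cmd, consolidated into a standalone sketch. The scripts/rpc.py invocation and the default /var/tmp/spdk.sock RPC socket are assumptions; the subcommands and arguments are exactly those traced in this log (the compliance suite additionally passes -m 32 to nvmf_create_subsystem to cap the namespace count):

    # Sketch: vfio-user target bring-up, assuming an nvmf_tgt already
    # listening on the default /var/tmp/spdk.sock RPC socket.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t VFIOUSER          # register the VFIOUSER transport
    mkdir -p /var/run/vfio-user                     # socket directory for the listener
    $rpc bdev_malloc_create 64 512 -b malloc0       # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0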
00:17:15.659 13:47:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.659 13:47:18 -- common/autotest_common.sh@10 -- # set +x 00:17:16.596 13:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:16.596 13:47:18 -- common/autotest_common.sh@852 -- # return 0 00:17:16.596 13:47:18 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:17.553 13:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.553 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 13:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:17.553 13:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.553 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 malloc0 00:17:17.553 13:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:17.553 13:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.553 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 13:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:17.553 13:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.553 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 13:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:17.553 13:47:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.553 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:17:17.553 13:47:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:17.553 13:47:19 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:49.643 Fuzzing completed. 
Shutting down the fuzz application 00:17:49.643 00:17:49.643 Dumping successful admin opcodes: 00:17:49.643 8, 9, 10, 24, 00:17:49.643 Dumping successful io opcodes: 00:17:49.643 0, 00:17:49.643 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1001511, total successful commands: 3921, random_seed: 486184448 00:17:49.643 NS: 0x200003a1ef00 admin qp, Total commands completed: 250350, total successful commands: 2025, random_seed: 3243942784 00:17:49.643 13:47:50 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:49.643 13:47:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.643 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:49.643 13:47:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.643 13:47:50 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1565638 00:17:49.643 13:47:50 -- common/autotest_common.sh@926 -- # '[' -z 1565638 ']' 00:17:49.643 13:47:50 -- common/autotest_common.sh@930 -- # kill -0 1565638 00:17:49.643 13:47:50 -- common/autotest_common.sh@931 -- # uname 00:17:49.643 13:47:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.643 13:47:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1565638 00:17:49.643 13:47:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:49.643 13:47:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:49.643 13:47:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1565638' 00:17:49.643 killing process with pid 1565638 00:17:49.643 13:47:50 -- common/autotest_common.sh@945 -- # kill 1565638 00:17:49.643 13:47:50 -- common/autotest_common.sh@950 -- # wait 1565638 00:17:49.643 13:47:50 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:49.643 13:47:50 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:49.643 00:17:49.643 real 0m32.652s 00:17:49.643 user 0m33.892s 00:17:49.643 sys 0m27.573s 00:17:49.643 13:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.643 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:49.643 ************************************ 00:17:49.643 END TEST nvmf_vfio_user_fuzz 00:17:49.643 ************************************ 00:17:49.643 13:47:50 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:49.643 13:47:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:49.643 13:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.643 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:49.643 ************************************ 00:17:49.643 START TEST nvmf_host_management 00:17:49.643 ************************************ 00:17:49.643 13:47:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:49.643 * Looking for test storage... 
00:17:49.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.643 13:47:50 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.643 13:47:50 -- nvmf/common.sh@7 -- # uname -s 00:17:49.643 13:47:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.643 13:47:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.643 13:47:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.643 13:47:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.643 13:47:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.643 13:47:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.643 13:47:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.643 13:47:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.643 13:47:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.643 13:47:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.643 13:47:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.643 13:47:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.643 13:47:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.643 13:47:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.643 13:47:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.643 13:47:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.643 13:47:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.643 13:47:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.643 13:47:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.644 13:47:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.644 13:47:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.644 13:47:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.644 13:47:50 -- paths/export.sh@5 -- # export PATH 00:17:49.644 13:47:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.644 13:47:50 -- nvmf/common.sh@46 -- # : 0 00:17:49.644 13:47:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:49.644 13:47:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:49.644 13:47:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:49.644 13:47:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.644 13:47:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.644 13:47:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:49.644 13:47:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:49.644 13:47:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:49.644 13:47:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.644 13:47:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.644 13:47:50 -- target/host_management.sh@104 -- # nvmftestinit 00:17:49.644 13:47:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:49.644 13:47:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.644 13:47:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:49.644 13:47:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:49.644 13:47:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:49.644 13:47:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.644 13:47:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.644 13:47:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.644 13:47:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:49.644 13:47:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:49.644 13:47:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:49.644 13:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:52.934 13:47:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:52.934 13:47:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:52.934 13:47:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:52.934 13:47:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:52.934 13:47:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:52.934 13:47:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:52.934 13:47:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:52.934 13:47:55 -- nvmf/common.sh@294 -- # net_devs=() 00:17:52.934 13:47:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:52.934 
13:47:55 -- nvmf/common.sh@295 -- # e810=() 00:17:52.934 13:47:55 -- nvmf/common.sh@295 -- # local -ga e810 00:17:52.934 13:47:55 -- nvmf/common.sh@296 -- # x722=() 00:17:52.934 13:47:55 -- nvmf/common.sh@296 -- # local -ga x722 00:17:52.934 13:47:55 -- nvmf/common.sh@297 -- # mlx=() 00:17:52.934 13:47:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:52.934 13:47:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.934 13:47:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:52.934 13:47:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:52.934 13:47:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.934 13:47:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:52.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:52.934 13:47:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.934 13:47:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:52.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:52.934 13:47:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.934 13:47:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.934 13:47:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.934 13:47:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:17:52.934 Found net devices under 0000:86:00.0: cvl_0_0 00:17:52.934 13:47:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.934 13:47:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.934 13:47:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.934 13:47:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.934 13:47:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:52.934 Found net devices under 0000:86:00.1: cvl_0_1 00:17:52.934 13:47:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.934 13:47:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:52.934 13:47:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:52.934 13:47:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:52.934 13:47:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.934 13:47:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.934 13:47:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.934 13:47:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:52.934 13:47:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.934 13:47:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.934 13:47:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:52.934 13:47:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.934 13:47:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.934 13:47:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:52.934 13:47:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:52.934 13:47:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.934 13:47:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.194 13:47:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.194 13:47:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.194 13:47:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:53.194 13:47:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.194 13:47:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.194 13:47:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.194 13:47:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:53.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:17:53.194 00:17:53.194 --- 10.0.0.2 ping statistics --- 00:17:53.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.194 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:53.194 13:47:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:17:53.194 00:17:53.194 --- 10.0.0.1 ping statistics --- 00:17:53.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.194 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:53.194 13:47:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.194 13:47:55 -- nvmf/common.sh@410 -- # return 0 00:17:53.194 13:47:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:53.194 13:47:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.194 13:47:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:53.194 13:47:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:53.194 13:47:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.194 13:47:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:53.194 13:47:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:53.194 13:47:55 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:53.194 13:47:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:53.194 13:47:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:53.194 13:47:55 -- common/autotest_common.sh@10 -- # set +x 00:17:53.194 ************************************ 00:17:53.194 START TEST nvmf_host_management 00:17:53.194 ************************************ 00:17:53.194 13:47:55 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:53.194 13:47:55 -- target/host_management.sh@69 -- # starttarget 00:17:53.194 13:47:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:53.194 13:47:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:53.194 13:47:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:53.194 13:47:55 -- common/autotest_common.sh@10 -- # set +x 00:17:53.194 13:47:55 -- nvmf/common.sh@469 -- # nvmfpid=1574064 00:17:53.194 13:47:55 -- nvmf/common.sh@470 -- # waitforlisten 1574064 00:17:53.194 13:47:55 -- common/autotest_common.sh@819 -- # '[' -z 1574064 ']' 00:17:53.194 13:47:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.194 13:47:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:53.194 13:47:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.194 13:47:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:53.194 13:47:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:53.194 13:47:55 -- common/autotest_common.sh@10 -- # set +x 00:17:53.453 [2024-07-11 13:47:55.654962] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
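The nvmftestinit plumbing traced above splits the two ice ports with a network namespace so one machine can act as both sides of the TCP transport: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1). A standalone sketch of the traced commands, with interface names and addresses as in this run:

    # Sketch: the netns split performed by nvmf_tcp_init above (run as root).
    ip netns add cvl_0_0_ns_spdk                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator -> target reachability

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.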
00:17:53.453 [2024-07-11 13:47:55.655004] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.453 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.453 [2024-07-11 13:47:55.713298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.453 [2024-07-11 13:47:55.751980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:53.453 [2024-07-11 13:47:55.752089] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.453 [2024-07-11 13:47:55.752097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.453 [2024-07-11 13:47:55.752106] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.453 [2024-07-11 13:47:55.752220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.453 [2024-07-11 13:47:55.752243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.453 [2024-07-11 13:47:55.752354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.453 [2024-07-11 13:47:55.752355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:54.020 13:47:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:54.020 13:47:56 -- common/autotest_common.sh@852 -- # return 0 00:17:54.020 13:47:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:54.020 13:47:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:54.020 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 13:47:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.278 13:47:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.278 13:47:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.278 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 [2024-07-11 13:47:56.492542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.278 13:47:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.278 13:47:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:54.278 13:47:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.278 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 13:47:56 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:54.278 13:47:56 -- target/host_management.sh@23 -- # cat 00:17:54.278 13:47:56 -- target/host_management.sh@30 -- # rpc_cmd 00:17:54.278 13:47:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.278 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 Malloc0 00:17:54.278 [2024-07-11 13:47:56.552434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.278 13:47:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.278 13:47:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:54.278 13:47:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:54.278 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 13:47:56 -- target/host_management.sh@73 -- # perfpid=1574304 00:17:54.278 13:47:56 -- target/host_management.sh@74 -- # 
waitforlisten 1574304 /var/tmp/bdevperf.sock 00:17:54.278 13:47:56 -- common/autotest_common.sh@819 -- # '[' -z 1574304 ']' 00:17:54.278 13:47:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.278 13:47:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:54.278 13:47:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.278 13:47:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:54.278 13:47:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.278 13:47:56 -- nvmf/common.sh@520 -- # config=() 00:17:54.278 13:47:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.278 13:47:56 -- nvmf/common.sh@520 -- # local subsystem config 00:17:54.278 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 13:47:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:54.278 13:47:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:54.278 { 00:17:54.278 "params": { 00:17:54.278 "name": "Nvme$subsystem", 00:17:54.278 "trtype": "$TEST_TRANSPORT", 00:17:54.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.278 "adrfam": "ipv4", 00:17:54.278 "trsvcid": "$NVMF_PORT", 00:17:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.278 "hdgst": ${hdgst:-false}, 00:17:54.278 "ddgst": ${ddgst:-false} 00:17:54.278 }, 00:17:54.278 "method": "bdev_nvme_attach_controller" 00:17:54.278 } 00:17:54.278 EOF 00:17:54.278 )") 00:17:54.278 13:47:56 -- nvmf/common.sh@542 -- # cat 00:17:54.278 13:47:56 -- nvmf/common.sh@544 -- # jq . 00:17:54.278 13:47:56 -- nvmf/common.sh@545 -- # IFS=, 00:17:54.278 13:47:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:54.278 "params": { 00:17:54.278 "name": "Nvme0", 00:17:54.278 "trtype": "tcp", 00:17:54.278 "traddr": "10.0.0.2", 00:17:54.278 "adrfam": "ipv4", 00:17:54.278 "trsvcid": "4420", 00:17:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:54.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:54.278 "hdgst": false, 00:17:54.278 "ddgst": false 00:17:54.278 }, 00:17:54.278 "method": "bdev_nvme_attach_controller" 00:17:54.278 }' 00:17:54.278 [2024-07-11 13:47:56.638897] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:54.278 [2024-07-11 13:47:56.638941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574304 ] 00:17:54.278 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.278 [2024-07-11 13:47:56.693784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.278 [2024-07-11 13:47:56.731320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.537 Running I/O for 10 seconds... 
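For reference, bdevperf gets its NVMe-oF controller here from a config generated on the fly: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed above, and the test feeds it through a process-substitution fd (--json /dev/fd/63). An equivalent standalone invocation with the config in a plain file is sketched below — the params block is verbatim from the trace, while the surrounding "subsystems"/"bdev" wrapper is an assumption about the helper's full output:

    # Sketch: the same bdevperf run with the generated config in a file.
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
        -q 64 -o 65536 -w verify -t 10

Here -q is the queue depth, -o the I/O size in bytes, -w the workload pattern and -t the run time in seconds, matching the traced command line.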
00:17:55.104 13:47:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.104 13:47:57 -- common/autotest_common.sh@852 -- # return 0 00:17:55.104 13:47:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:55.104 13:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.104 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:55.104 13:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.104 13:47:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.104 13:47:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:55.104 13:47:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:55.104 13:47:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:55.104 13:47:57 -- target/host_management.sh@52 -- # local ret=1 00:17:55.104 13:47:57 -- target/host_management.sh@53 -- # local i 00:17:55.104 13:47:57 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:55.104 13:47:57 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:55.104 13:47:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:55.104 13:47:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:55.104 13:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.104 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:55.104 13:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.104 13:47:57 -- target/host_management.sh@55 -- # read_io_count=1717 00:17:55.104 13:47:57 -- target/host_management.sh@58 -- # '[' 1717 -ge 100 ']' 00:17:55.104 13:47:57 -- target/host_management.sh@59 -- # ret=0 00:17:55.104 13:47:57 -- target/host_management.sh@60 -- # break 00:17:55.104 13:47:57 -- target/host_management.sh@64 -- # return 0 00:17:55.104 13:47:57 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:55.104 13:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.104 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:55.104 [2024-07-11 13:47:57.511835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the state(5) to be set 00:17:55.104 [2024-07-11 13:47:57.511915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1748960 is same with the 
state(5) to be set 00:17:55.104 [... the same nvmf_tcp_qpair_set_recv_state message repeats for each remaining state transition on tqpair=0x1748960 ...] 00:17:55.104 [2024-07-11 13:47:57.513801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.104 [2024-07-11 13:47:57.513835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.104 [... analogous print_command/print_completion pairs follow for every outstanding READ/WRITE on qid:1 (cids 0-63, lbas 102784-110848), each completed as ABORTED - SQ DELETION (00/08) while the host is removed from the subsystem ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.514949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.105 [2024-07-11 13:47:57.514957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.105 [2024-07-11 13:47:57.515025] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x85fad0 was disconnected and freed. reset controller. 00:17:55.105 [2024-07-11 13:47:57.515939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:55.105 13:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.105 13:47:57 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:55.105 13:47:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:55.105 task offset: 107136 on job bdev=Nvme0n1 fails 00:17:55.105 00:17:55.105 Latency(us) 00:17:55.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.105 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:55.105 Job: Nvme0n1 ended in about 0.55 seconds with error 00:17:55.105 Verification LBA range: start 0x0 length 0x400 00:17:55.105 Nvme0n1 : 0.55 3317.49 207.34 115.58 0.00 18393.50 1495.93 23137.06 00:17:55.105 =================================================================================================================== 00:17:55.105 Total : 3317.49 207.34 115.58 0.00 18393.50 1495.93 23137.06 00:17:55.105 [2024-07-11 13:47:57.517526] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:55.105 [2024-07-11 13:47:57.517543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x861d90 (9): Bad file descriptor 00:17:55.105 13:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:55.105 13:47:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:55.105 13:47:57 -- target/host_management.sh@87 -- # sleep 1 00:17:55.363 [2024-07-11 13:47:57.569554] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
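The "(00/08)" tuple printed with every completion above is Status Code Type 0x0 (generic command status) and Status Code 0x08, i.e. Command Aborted due to SQ Deletion: when the qpair is torn down for the reset, every command still queued on qid:1 is completed with this status rather than silently dropped. When triaging a flood like this in a saved console log, a couple of one-liners condense it quickly (the log filename is an assumption):

grep -c 'ABORTED - SQ DELETION' console.log            # how many I/Os the teardown failed back
grep -o 'lba:[0-9]*' console.log | sort -t: -k2 -n -u  # distinct LBAs that were in flight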
00:17:56.300 13:47:58 -- target/host_management.sh@91 -- # kill -9 1574304 00:17:56.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1574304) - No such process 00:17:56.300 13:47:58 -- target/host_management.sh@91 -- # true 00:17:56.300 13:47:58 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:56.300 13:47:58 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:56.300 13:47:58 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:56.300 13:47:58 -- nvmf/common.sh@520 -- # config=() 00:17:56.300 13:47:58 -- nvmf/common.sh@520 -- # local subsystem config 00:17:56.300 13:47:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:56.300 13:47:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:56.300 { 00:17:56.300 "params": { 00:17:56.300 "name": "Nvme$subsystem", 00:17:56.300 "trtype": "$TEST_TRANSPORT", 00:17:56.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.300 "adrfam": "ipv4", 00:17:56.300 "trsvcid": "$NVMF_PORT", 00:17:56.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.300 "hdgst": ${hdgst:-false}, 00:17:56.300 "ddgst": ${ddgst:-false} 00:17:56.300 }, 00:17:56.300 "method": "bdev_nvme_attach_controller" 00:17:56.300 } 00:17:56.300 EOF 00:17:56.300 )") 00:17:56.300 13:47:58 -- nvmf/common.sh@542 -- # cat 00:17:56.300 13:47:58 -- nvmf/common.sh@544 -- # jq . 00:17:56.300 13:47:58 -- nvmf/common.sh@545 -- # IFS=, 00:17:56.300 13:47:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:56.300 "params": { 00:17:56.300 "name": "Nvme0", 00:17:56.300 "trtype": "tcp", 00:17:56.300 "traddr": "10.0.0.2", 00:17:56.300 "adrfam": "ipv4", 00:17:56.300 "trsvcid": "4420", 00:17:56.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:56.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:56.300 "hdgst": false, 00:17:56.300 "ddgst": false 00:17:56.300 }, 00:17:56.300 "method": "bdev_nvme_attach_controller" 00:17:56.300 }' 00:17:56.300 [2024-07-11 13:47:58.578811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:56.300 [2024-07-11 13:47:58.578857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574670 ] 00:17:56.300 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.300 [2024-07-11 13:47:58.634828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.300 [2024-07-11 13:47:58.669631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.559 Running I/O for 1 seconds... 
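The gen_nvmf_target_json heredoc traced above assembles the JSON that bdevperf reads from /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, wrapped in a bdev-subsystem config. A sketch of an equivalent standalone invocation, assuming the standard SPDK JSON-config wrapper and a regular file in place of the fd substitution:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as the run above: queue depth 64, 64 KiB verify I/O for 1 second
./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1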
00:17:57.495 00:17:57.495 Latency(us) 00:17:57.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.495 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:57.495 Verification LBA range: start 0x0 length 0x400 00:17:57.495 Nvme0n1 : 1.01 3907.73 244.23 0.00 0.00 16141.64 1061.40 27468.13 00:17:57.495 =================================================================================================================== 00:17:57.495 Total : 3907.73 244.23 0.00 0.00 16141.64 1061.40 27468.13 00:17:57.754 13:48:00 -- target/host_management.sh@101 -- # stoptarget 00:17:57.754 13:48:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:57.754 13:48:00 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:57.754 13:48:00 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:57.754 13:48:00 -- target/host_management.sh@40 -- # nvmftestfini 00:17:57.754 13:48:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:57.754 13:48:00 -- nvmf/common.sh@116 -- # sync 00:17:57.754 13:48:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:57.754 13:48:00 -- nvmf/common.sh@119 -- # set +e 00:17:57.754 13:48:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:57.754 13:48:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:57.754 rmmod nvme_tcp 00:17:57.754 rmmod nvme_fabrics 00:17:57.754 rmmod nvme_keyring 00:17:57.754 13:48:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:57.754 13:48:00 -- nvmf/common.sh@123 -- # set -e 00:17:57.754 13:48:00 -- nvmf/common.sh@124 -- # return 0 00:17:57.754 13:48:00 -- nvmf/common.sh@477 -- # '[' -n 1574064 ']' 00:17:57.755 13:48:00 -- nvmf/common.sh@478 -- # killprocess 1574064 00:17:57.755 13:48:00 -- common/autotest_common.sh@926 -- # '[' -z 1574064 ']' 00:17:57.755 13:48:00 -- common/autotest_common.sh@930 -- # kill -0 1574064 00:17:57.755 13:48:00 -- common/autotest_common.sh@931 -- # uname 00:17:57.755 13:48:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.755 13:48:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1574064 00:17:57.755 13:48:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:57.755 13:48:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:57.755 13:48:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1574064' 00:17:57.755 killing process with pid 1574064 00:17:57.755 13:48:00 -- common/autotest_common.sh@945 -- # kill 1574064 00:17:57.755 13:48:00 -- common/autotest_common.sh@950 -- # wait 1574064 00:17:58.013 [2024-07-11 13:48:00.371323] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:58.013 13:48:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.013 13:48:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:58.013 13:48:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:58.013 13:48:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.013 13:48:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:58.013 13:48:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.013 13:48:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.013 13:48:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.546 13:48:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
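The nvmftestfini trace above tears down in a fixed order: unload the nvme-tcp/fabrics/keyring modules, then stop the target through a guarded helper that checks the pid is alive, refuses to signal a sudo wrapper, and reaps the process afterwards. A minimal sketch of that helper as reconstructed from the trace (names assumed):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0       # already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != sudo ] || return 1       # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true              # reap; the app may exit non-zero
}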
00:18:00.546 00:18:00.546 real 0m6.856s 00:18:00.546 user 0m20.812s 00:18:00.546 sys 0m1.239s 00:18:00.546 13:48:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.546 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:00.546 ************************************ 00:18:00.546 END TEST nvmf_host_management 00:18:00.546 ************************************ 00:18:00.546 13:48:02 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:00.546 00:18:00.546 real 0m11.861s 00:18:00.546 user 0m22.094s 00:18:00.546 sys 0m4.918s 00:18:00.546 13:48:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.546 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:00.546 ************************************ 00:18:00.546 END TEST nvmf_host_management 00:18:00.546 ************************************ 00:18:00.546 13:48:02 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:00.546 13:48:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:00.546 13:48:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.546 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:00.546 ************************************ 00:18:00.546 START TEST nvmf_lvol 00:18:00.546 ************************************ 00:18:00.546 13:48:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:00.546 * Looking for test storage... 00:18:00.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.546 13:48:02 -- nvmf/common.sh@7 -- # uname -s 00:18:00.546 13:48:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.546 13:48:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.546 13:48:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.546 13:48:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.546 13:48:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.546 13:48:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.546 13:48:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.546 13:48:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.546 13:48:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.546 13:48:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.546 13:48:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.546 13:48:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.546 13:48:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.546 13:48:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.546 13:48:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.546 13:48:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.546 13:48:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.546 13:48:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.546 13:48:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.546 13:48:02 -- paths/export.sh@2 -- # 
PATH=[... full values elided: paths/export.sh@2-@4 each set and print a PATH that repeats /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin several times ahead of the system directories; @5 exports it; @6 echoes the final value ...]
00:18:00.546 13:48:02 -- nvmf/common.sh@46 -- # : 0
00:18:00.546 13:48:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:18:00.546 13:48:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:18:00.546 13:48:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:18:00.546 13:48:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:00.546 13:48:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:00.546 13:48:02 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:18:00.546 13:48:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:18:00.546 13:48:02 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:00.546 13:48:02 -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:18:00.546 13:48:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:18:00.546 13:48:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT
SIGTERM EXIT 00:18:00.546 13:48:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:00.546 13:48:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:00.546 13:48:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:00.546 13:48:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.546 13:48:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.546 13:48:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.546 13:48:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:00.546 13:48:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:00.546 13:48:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:00.546 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:18:05.848 13:48:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:05.848 13:48:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:05.848 13:48:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:05.848 13:48:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:05.848 13:48:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:05.848 13:48:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:05.848 13:48:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:05.848 13:48:07 -- nvmf/common.sh@294 -- # net_devs=() 00:18:05.848 13:48:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:05.848 13:48:07 -- nvmf/common.sh@295 -- # e810=() 00:18:05.848 13:48:07 -- nvmf/common.sh@295 -- # local -ga e810 00:18:05.848 13:48:07 -- nvmf/common.sh@296 -- # x722=() 00:18:05.848 13:48:07 -- nvmf/common.sh@296 -- # local -ga x722 00:18:05.848 13:48:07 -- nvmf/common.sh@297 -- # mlx=() 00:18:05.848 13:48:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:05.848 13:48:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.848 13:48:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:05.848 13:48:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:05.848 13:48:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:05.848 13:48:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:05.848 13:48:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:05.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:05.848 13:48:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:05.848 13:48:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:05.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:05.848 13:48:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:05.848 13:48:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:05.848 13:48:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:05.848 13:48:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.849 13:48:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:05.849 13:48:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.849 13:48:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:05.849 Found net devices under 0000:86:00.0: cvl_0_0 00:18:05.849 13:48:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.849 13:48:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:05.849 13:48:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.849 13:48:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:05.849 13:48:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.849 13:48:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:05.849 Found net devices under 0000:86:00.1: cvl_0_1 00:18:05.849 13:48:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.849 13:48:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:05.849 13:48:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:05.849 13:48:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:05.849 13:48:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:05.849 13:48:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:05.849 13:48:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.849 13:48:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.849 13:48:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.849 13:48:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:05.849 13:48:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.849 13:48:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.849 13:48:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:05.849 13:48:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.849 13:48:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.849 13:48:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:05.849 13:48:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:05.849 13:48:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.849 13:48:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.849 13:48:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
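The discovery pass above matched both 0x8086:0x159b (E810) functions and resolved each to its kernel net device through sysfs; the namespace plumbing then continues below. A standalone sketch of the same PCI-to-netdev lookup (address taken from this run):

pci=0000:86:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue                  # function has no bound net driver
    echo "$pci -> $(basename "$dev")"          # e.g. 0000:86:00.0 -> cvl_0_0
done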
00:18:05.849 13:48:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.849 13:48:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:05.849 13:48:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.849 13:48:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.849 13:48:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.849 13:48:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:05.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:18:05.849 00:18:05.849 --- 10.0.0.2 ping statistics --- 00:18:05.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.849 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:05.849 13:48:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:18:05.849 00:18:05.849 --- 10.0.0.1 ping statistics --- 00:18:05.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.849 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:18:05.849 13:48:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.849 13:48:07 -- nvmf/common.sh@410 -- # return 0 00:18:05.849 13:48:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:05.849 13:48:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.849 13:48:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:05.849 13:48:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:05.849 13:48:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.849 13:48:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:05.849 13:48:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:05.849 13:48:07 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:05.849 13:48:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:05.849 13:48:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:05.849 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:18:05.849 13:48:07 -- nvmf/common.sh@469 -- # nvmfpid=1578470 00:18:05.849 13:48:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:05.849 13:48:07 -- nvmf/common.sh@470 -- # waitforlisten 1578470 00:18:05.849 13:48:07 -- common/autotest_common.sh@819 -- # '[' -z 1578470 ']' 00:18:05.849 13:48:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.849 13:48:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:05.849 13:48:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.849 13:48:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:05.849 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:18:05.849 [2024-07-11 13:48:07.927291] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
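The two pings above verify the finished topology: cvl_0_0 serves the target at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed, the nvmf_tcp_init sequence just traced amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # initiator -> target check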
00:18:05.849 [2024-07-11 13:48:07.927334] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.849 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.849 [2024-07-11 13:48:07.983339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:05.849 [2024-07-11 13:48:08.022587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:05.849 [2024-07-11 13:48:08.022698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.849 [2024-07-11 13:48:08.022706] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.849 [2024-07-11 13:48:08.022713] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.849 [2024-07-11 13:48:08.022759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.849 [2024-07-11 13:48:08.022856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.849 [2024-07-11 13:48:08.022866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.415 13:48:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.415 13:48:08 -- common/autotest_common.sh@852 -- # return 0 00:18:06.415 13:48:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:06.415 13:48:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:06.415 13:48:08 -- common/autotest_common.sh@10 -- # set +x 00:18:06.415 13:48:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.415 13:48:08 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:06.673 [2024-07-11 13:48:08.930556] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.673 13:48:08 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:06.933 13:48:09 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:06.933 13:48:09 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:06.933 13:48:09 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:06.933 13:48:09 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:07.192 13:48:09 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:07.451 13:48:09 -- target/nvmf_lvol.sh@29 -- # lvs=4e38c330-0f58-48f4-8fdf-143bedefe329 00:18:07.451 13:48:09 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e38c330-0f58-48f4-8fdf-143bedefe329 lvol 20 00:18:07.451 13:48:09 -- target/nvmf_lvol.sh@32 -- # lvol=a71b873a-cdba-4eb0-b9f3-733ca8070ce4 00:18:07.451 13:48:09 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:07.710 13:48:10 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
a71b873a-cdba-4eb0-b9f3-733ca8070ce4 00:18:07.969 13:48:10 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:07.969 [2024-07-11 13:48:10.387325] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.969 13:48:10 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:08.228 13:48:10 -- target/nvmf_lvol.sh@42 -- # perf_pid=1579359 00:18:08.228 13:48:10 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:08.228 13:48:10 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:08.228 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.163 13:48:11 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a71b873a-cdba-4eb0-b9f3-733ca8070ce4 MY_SNAPSHOT 00:18:09.422 13:48:11 -- target/nvmf_lvol.sh@47 -- # snapshot=7ebef631-ac78-4f5c-bbb6-700ca027be0e 00:18:09.422 13:48:11 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a71b873a-cdba-4eb0-b9f3-733ca8070ce4 30 00:18:09.681 13:48:12 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7ebef631-ac78-4f5c-bbb6-700ca027be0e MY_CLONE 00:18:09.941 13:48:12 -- target/nvmf_lvol.sh@49 -- # clone=246cbbe1-5b21-4574-95af-b858aeeb1fcb 00:18:09.941 13:48:12 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 246cbbe1-5b21-4574-95af-b858aeeb1fcb 00:18:10.200 13:48:12 -- target/nvmf_lvol.sh@53 -- # wait 1579359 00:18:20.180 Initializing NVMe Controllers 00:18:20.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:20.180 Controller IO queue size 128, less than required. 00:18:20.180 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:20.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:20.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:20.180 Initialization complete. Launching workers. 
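The rpc.py calls traced across this stretch build the volume stack before the perf run starts, then mutate it while spdk_nvme_perf drives random writes at the exported namespace. Condensed into one sequence (sizes in MiB; each create call prints the UUID captured by the test):

rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512                        # Malloc0
$rpc bdev_malloc_create 64 512                        # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # logical volume store on the raid
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume, exported as nsid 1
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze contents under load
$rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # writable clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # fully allocate the clone, detaching it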
00:18:20.180 ======================================================== 00:18:20.180 Latency(us) 00:18:20.180 Device Information : IOPS MiB/s Average min max 00:18:20.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12104.60 47.28 10582.43 1714.29 72621.78 00:18:20.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11973.10 46.77 10697.95 3604.71 47584.86 00:18:20.180 ======================================================== 00:18:20.180 Total : 24077.70 94.05 10639.88 1714.29 72621.78 00:18:20.180 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a71b873a-cdba-4eb0-b9f3-733ca8070ce4 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e38c330-0f58-48f4-8fdf-143bedefe329 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:20.180 13:48:21 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:20.180 13:48:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:20.180 13:48:21 -- nvmf/common.sh@116 -- # sync 00:18:20.180 13:48:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:20.180 13:48:21 -- nvmf/common.sh@119 -- # set +e 00:18:20.180 13:48:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:20.180 13:48:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:20.180 rmmod nvme_tcp 00:18:20.180 rmmod nvme_fabrics 00:18:20.180 rmmod nvme_keyring 00:18:20.180 13:48:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:20.180 13:48:21 -- nvmf/common.sh@123 -- # set -e 00:18:20.180 13:48:21 -- nvmf/common.sh@124 -- # return 0 00:18:20.180 13:48:21 -- nvmf/common.sh@477 -- # '[' -n 1578470 ']' 00:18:20.180 13:48:21 -- nvmf/common.sh@478 -- # killprocess 1578470 00:18:20.180 13:48:21 -- common/autotest_common.sh@926 -- # '[' -z 1578470 ']' 00:18:20.180 13:48:21 -- common/autotest_common.sh@930 -- # kill -0 1578470 00:18:20.180 13:48:21 -- common/autotest_common.sh@931 -- # uname 00:18:20.180 13:48:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:20.180 13:48:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1578470 00:18:20.180 13:48:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:20.180 13:48:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:20.180 13:48:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1578470' 00:18:20.180 killing process with pid 1578470 00:18:20.180 13:48:21 -- common/autotest_common.sh@945 -- # kill 1578470 00:18:20.180 13:48:21 -- common/autotest_common.sh@950 -- # wait 1578470 00:18:20.180 13:48:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.180 13:48:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.180 13:48:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.180 13:48:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.180 13:48:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.180 13:48:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.180 13:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.180 13:48:21 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:21.557 13:48:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:21.557 00:18:21.557 real 0m21.480s 00:18:21.557 user 1m4.214s 00:18:21.557 sys 0m6.635s 00:18:21.557 13:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:21.557 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.557 ************************************ 00:18:21.557 END TEST nvmf_lvol 00:18:21.557 ************************************ 00:18:21.815 13:48:24 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:21.815 13:48:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:21.815 13:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:21.815 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:18:21.815 ************************************ 00:18:21.815 START TEST nvmf_lvs_grow 00:18:21.815 ************************************ 00:18:21.815 13:48:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:21.815 * Looking for test storage... 00:18:21.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.815 13:48:24 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.815 13:48:24 -- nvmf/common.sh@7 -- # uname -s 00:18:21.815 13:48:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.815 13:48:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.815 13:48:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.815 13:48:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.815 13:48:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.815 13:48:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.815 13:48:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.815 13:48:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.815 13:48:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.815 13:48:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.815 13:48:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.815 13:48:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.815 13:48:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.815 13:48:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.815 13:48:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.815 13:48:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.815 13:48:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.815 13:48:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.815 13:48:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.816 13:48:24 -- paths/export.sh@2 -- # 
PATH=[... full values elided: paths/export.sh@2-@4 each set and print a PATH that repeats /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin several times ahead of the system directories; @5 exports it; @6 echoes the final value ...]
00:18:21.816 13:48:24 -- nvmf/common.sh@46 -- # : 0
00:18:21.816 13:48:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:18:21.816 13:48:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:18:21.816 13:48:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:18:21.816 13:48:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:21.816 13:48:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:21.816 13:48:24 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:18:21.816 13:48:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:18:21.816 13:48:24 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:18:21.816 13:48:24 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:21.816 13:48:24 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:21.816 13:48:24 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit
00:18:21.816 13:48:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:18:21.816 13:48:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:21.816 13:48:24 -- nvmf/common.sh@436 -- # prepare_net_devs
00:18:21.816 13:48:24 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:18:21.816 13:48:24 -- nvmf/common.sh@400 -- #
remove_spdk_ns 00:18:21.816 13:48:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.816 13:48:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.816 13:48:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.816 13:48:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:21.816 13:48:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:21.816 13:48:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:21.816 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:18:27.085 13:48:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.085 13:48:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.085 13:48:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.085 13:48:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.085 13:48:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.085 13:48:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.085 13:48:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.085 13:48:29 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.085 13:48:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.085 13:48:29 -- nvmf/common.sh@295 -- # e810=() 00:18:27.085 13:48:29 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.085 13:48:29 -- nvmf/common.sh@296 -- # x722=() 00:18:27.085 13:48:29 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.085 13:48:29 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.085 13:48:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.085 13:48:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.085 13:48:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.085 13:48:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:27.085 13:48:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.085 13:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:27.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:27.085 13:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.085 
13:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.085 13:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:27.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:27.085 13:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.085 13:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.085 13:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.085 13:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:27.085 Found net devices under 0000:86:00.0: cvl_0_0 00:18:27.085 13:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.085 13:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.085 13:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.085 13:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.085 13:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:27.085 Found net devices under 0000:86:00.1: cvl_0_1 00:18:27.085 13:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.085 13:48:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.085 13:48:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:27.085 13:48:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:27.085 13:48:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.085 13:48:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.085 13:48:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.085 13:48:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:27.085 13:48:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.085 13:48:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.085 13:48:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:27.085 13:48:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.085 13:48:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.085 13:48:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:27.085 13:48:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:27.085 13:48:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.085 13:48:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.085 13:48:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.085 13:48:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.085 13:48:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:27.085 
13:48:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:27.085 13:48:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:27.085 13:48:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:27.085 13:48:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:18:27.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:27.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms
00:18:27.085
00:18:27.085 --- 10.0.0.2 ping statistics ---
00:18:27.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:27.085 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms
00:18:27.085 13:48:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:27.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:27.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:18:27.085
00:18:27.085 --- 10.0.0.1 ping statistics ---
00:18:27.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:27.085 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:18:27.085 13:48:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:27.085 13:48:29 -- nvmf/common.sh@410 -- # return 0
00:18:27.085 13:48:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:18:27.085 13:48:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:27.085 13:48:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:18:27.085 13:48:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:18:27.085 13:48:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:27.085 13:48:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:18:27.085 13:48:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:18:27.085 13:48:29 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:18:27.085 13:48:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:18:27.085 13:48:29 -- common/autotest_common.sh@712 -- # xtrace_disable
00:18:27.085 13:48:29 -- common/autotest_common.sh@10 -- # set +x
00:18:27.085 13:48:29 -- nvmf/common.sh@469 -- # nvmfpid=1584727
00:18:27.085 13:48:29 -- nvmf/common.sh@470 -- # waitforlisten 1584727
00:18:27.085 13:48:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:18:27.085 13:48:29 -- common/autotest_common.sh@819 -- # '[' -z 1584727 ']'
00:18:27.085 13:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:27.085 13:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:27.085 13:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:27.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:27.085 13:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:27.085 13:48:29 -- common/autotest_common.sh@10 -- # set +x
00:18:27.344 [2024-07-11 13:48:29.565137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
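The network bring-up that completes here is the foundation for everything that follows: nvmf_tcp_init splits the two ports of one physical NIC into a target side and an initiator side, moving the first port into a private network namespace where the target will run and leaving the second in the root namespace for the initiator, then gates the rest of the suite on a ping in each direction. A condensed sketch of the sequence, assembled from the commands traced above (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are the values this particular rig produced; other rigs will differ):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Only once both pings succeed does nvmf_tcp_init return, and from that point every target launch is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt invocation above carries that prefix.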
00:18:27.344 [2024-07-11 13:48:29.565186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.344 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.344 [2024-07-11 13:48:29.622695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.344 [2024-07-11 13:48:29.659998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.344 [2024-07-11 13:48:29.660108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.344 [2024-07-11 13:48:29.660116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.344 [2024-07-11 13:48:29.660122] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.344 [2024-07-11 13:48:29.660146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.911 13:48:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:27.911 13:48:30 -- common/autotest_common.sh@852 -- # return 0 00:18:27.911 13:48:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:27.911 13:48:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:27.911 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.169 13:48:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:28.169 [2024-07-11 13:48:30.534172] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:28.169 13:48:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:28.169 13:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:28.169 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:18:28.169 ************************************ 00:18:28.169 START TEST lvs_grow_clean 00:18:28.169 ************************************ 00:18:28.169 13:48:30 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.169 13:48:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:28.428 13:48:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:28.428 13:48:30 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:28.686 13:48:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:28.686 13:48:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:28.686 13:48:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:28.686 13:48:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:28.686 13:48:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:28.686 13:48:31 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c lvol 150 00:18:28.944 13:48:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b23211c9-fc75-4125-a2ff-13af1fd302cc 00:18:28.944 13:48:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.944 13:48:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:29.271 [2024-07-11 13:48:31.432329] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:29.271 [2024-07-11 13:48:31.432379] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:29.271 true 00:18:29.271 13:48:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:29.271 13:48:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:29.271 13:48:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:29.271 13:48:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:29.530 13:48:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b23211c9-fc75-4125-a2ff-13af1fd302cc 00:18:29.530 13:48:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:29.788 [2024-07-11 13:48:32.082326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.788 13:48:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:30.047 13:48:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1585196 00:18:30.047 13:48:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.047 13:48:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:30.047 13:48:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1585196 /var/tmp/bdevperf.sock 00:18:30.047 13:48:32 -- common/autotest_common.sh@819 -- # '[' -z 1585196 ']' 00:18:30.047 
13:48:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.047 13:48:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.047 13:48:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.047 13:48:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.047 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:18:30.047 [2024-07-11 13:48:32.292447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:30.047 [2024-07-11 13:48:32.292494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585196 ] 00:18:30.047 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.047 [2024-07-11 13:48:32.346510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.047 [2024-07-11 13:48:32.385066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.982 13:48:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:30.982 13:48:33 -- common/autotest_common.sh@852 -- # return 0 00:18:30.982 13:48:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:30.982 Nvme0n1 00:18:30.982 13:48:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:31.240 [ 00:18:31.240 { 00:18:31.240 "name": "Nvme0n1", 00:18:31.240 "aliases": [ 00:18:31.240 "b23211c9-fc75-4125-a2ff-13af1fd302cc" 00:18:31.240 ], 00:18:31.240 "product_name": "NVMe disk", 00:18:31.240 "block_size": 4096, 00:18:31.240 "num_blocks": 38912, 00:18:31.240 "uuid": "b23211c9-fc75-4125-a2ff-13af1fd302cc", 00:18:31.240 "assigned_rate_limits": { 00:18:31.240 "rw_ios_per_sec": 0, 00:18:31.240 "rw_mbytes_per_sec": 0, 00:18:31.240 "r_mbytes_per_sec": 0, 00:18:31.240 "w_mbytes_per_sec": 0 00:18:31.240 }, 00:18:31.240 "claimed": false, 00:18:31.240 "zoned": false, 00:18:31.240 "supported_io_types": { 00:18:31.240 "read": true, 00:18:31.240 "write": true, 00:18:31.240 "unmap": true, 00:18:31.240 "write_zeroes": true, 00:18:31.240 "flush": true, 00:18:31.240 "reset": true, 00:18:31.240 "compare": true, 00:18:31.240 "compare_and_write": true, 00:18:31.240 "abort": true, 00:18:31.240 "nvme_admin": true, 00:18:31.240 "nvme_io": true 00:18:31.240 }, 00:18:31.240 "driver_specific": { 00:18:31.240 "nvme": [ 00:18:31.240 { 00:18:31.240 "trid": { 00:18:31.240 "trtype": "TCP", 00:18:31.240 "adrfam": "IPv4", 00:18:31.240 "traddr": "10.0.0.2", 00:18:31.240 "trsvcid": "4420", 00:18:31.240 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:31.240 }, 00:18:31.240 "ctrlr_data": { 00:18:31.240 "cntlid": 1, 00:18:31.240 "vendor_id": "0x8086", 00:18:31.240 "model_number": "SPDK bdev Controller", 00:18:31.240 "serial_number": "SPDK0", 00:18:31.240 "firmware_revision": "24.01.1", 00:18:31.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:31.240 "oacs": { 00:18:31.240 "security": 0, 00:18:31.240 "format": 0, 00:18:31.240 "firmware": 0, 00:18:31.240 "ns_manage": 0 00:18:31.240 }, 00:18:31.240 "multi_ctrlr": 
true, 00:18:31.240 "ana_reporting": false 00:18:31.240 }, 00:18:31.240 "vs": { 00:18:31.240 "nvme_version": "1.3" 00:18:31.240 }, 00:18:31.240 "ns_data": { 00:18:31.240 "id": 1, 00:18:31.240 "can_share": true 00:18:31.240 } 00:18:31.240 } 00:18:31.240 ], 00:18:31.240 "mp_policy": "active_passive" 00:18:31.240 } 00:18:31.240 } 00:18:31.240 ] 00:18:31.240 13:48:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.240 13:48:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1585372 00:18:31.240 13:48:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:31.240 Running I/O for 10 seconds... 00:18:32.174 Latency(us) 00:18:32.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.174 Nvme0n1 : 1.00 22902.00 89.46 0.00 0.00 0.00 0.00 0.00 00:18:32.174 =================================================================================================================== 00:18:32.174 Total : 22902.00 89.46 0.00 0.00 0.00 0.00 0.00 00:18:32.174 00:18:33.109 13:48:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:33.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.368 Nvme0n1 : 2.00 22991.00 89.81 0.00 0.00 0.00 0.00 0.00 00:18:33.368 =================================================================================================================== 00:18:33.368 Total : 22991.00 89.81 0.00 0.00 0.00 0.00 0.00 00:18:33.368 00:18:33.368 true 00:18:33.368 13:48:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:33.368 13:48:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:33.626 13:48:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:33.626 13:48:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:33.626 13:48:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 1585372 00:18:34.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.192 Nvme0n1 : 3.00 22999.33 89.84 0.00 0.00 0.00 0.00 0.00 00:18:34.192 =================================================================================================================== 00:18:34.192 Total : 22999.33 89.84 0.00 0.00 0.00 0.00 0.00 00:18:34.192 00:18:35.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.570 Nvme0n1 : 4.00 23067.50 90.11 0.00 0.00 0.00 0.00 0.00 00:18:35.570 =================================================================================================================== 00:18:35.570 Total : 23067.50 90.11 0.00 0.00 0.00 0.00 0.00 00:18:35.570 00:18:36.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.505 Nvme0n1 : 5.00 23106.80 90.26 0.00 0.00 0.00 0.00 0.00 00:18:36.505 =================================================================================================================== 00:18:36.505 Total : 23106.80 90.26 0.00 0.00 0.00 0.00 0.00 00:18:36.505 00:18:37.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.440 Nvme0n1 : 6.00 23141.00 90.39 0.00 0.00 0.00 0.00 0.00 00:18:37.440 
=================================================================================================================== 00:18:37.440 Total : 23141.00 90.39 0.00 0.00 0.00 0.00 0.00 00:18:37.440 00:18:38.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.376 Nvme0n1 : 7.00 23128.86 90.35 0.00 0.00 0.00 0.00 0.00 00:18:38.376 =================================================================================================================== 00:18:38.376 Total : 23128.86 90.35 0.00 0.00 0.00 0.00 0.00 00:18:38.376 00:18:39.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.311 Nvme0n1 : 8.00 23160.75 90.47 0.00 0.00 0.00 0.00 0.00 00:18:39.311 =================================================================================================================== 00:18:39.311 Total : 23160.75 90.47 0.00 0.00 0.00 0.00 0.00 00:18:39.311 00:18:40.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.246 Nvme0n1 : 9.00 23182.89 90.56 0.00 0.00 0.00 0.00 0.00 00:18:40.246 =================================================================================================================== 00:18:40.246 Total : 23182.89 90.56 0.00 0.00 0.00 0.00 0.00 00:18:40.246 00:18:41.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.620 Nvme0n1 : 10.00 23197.40 90.61 0.00 0.00 0.00 0.00 0.00 00:18:41.620 =================================================================================================================== 00:18:41.620 Total : 23197.40 90.61 0.00 0.00 0.00 0.00 0.00 00:18:41.620 00:18:41.620 00:18:41.620 Latency(us) 00:18:41.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.620 Nvme0n1 : 10.01 23197.73 90.62 0.00 0.00 5513.94 3604.48 9175.04 00:18:41.620 =================================================================================================================== 00:18:41.620 Total : 23197.73 90.62 0.00 0.00 5513.94 3604.48 9175.04 00:18:41.620 0 00:18:41.620 13:48:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1585196 00:18:41.620 13:48:43 -- common/autotest_common.sh@926 -- # '[' -z 1585196 ']' 00:18:41.620 13:48:43 -- common/autotest_common.sh@930 -- # kill -0 1585196 00:18:41.620 13:48:43 -- common/autotest_common.sh@931 -- # uname 00:18:41.620 13:48:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:41.620 13:48:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1585196 00:18:41.620 13:48:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:41.620 13:48:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:41.620 13:48:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1585196' 00:18:41.620 killing process with pid 1585196 00:18:41.620 13:48:43 -- common/autotest_common.sh@945 -- # kill 1585196 00:18:41.620 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.620 00:18:41.620 Latency(us) 00:18:41.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.620 =================================================================================================================== 00:18:41.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.621 13:48:43 -- common/autotest_common.sh@950 -- # wait 1585196 00:18:41.621 13:48:43 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:41.621 13:48:44 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:41.621 13:48:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:41.878 13:48:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:41.878 13:48:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:41.878 13:48:44 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:42.136 [2024-07-11 13:48:44.392679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:42.136 13:48:44 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:42.136 13:48:44 -- common/autotest_common.sh@640 -- # local es=0 00:18:42.136 13:48:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:42.136 13:48:44 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.136 13:48:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.136 13:48:44 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.136 13:48:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.136 13:48:44 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.136 13:48:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.136 13:48:44 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.136 13:48:44 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:42.136 13:48:44 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:42.136 request: 00:18:42.136 { 00:18:42.136 "uuid": "c7be92af-0ed4-497c-adb2-ba4cd6c9d91c", 00:18:42.136 "method": "bdev_lvol_get_lvstores", 00:18:42.136 "req_id": 1 00:18:42.136 } 00:18:42.137 Got JSON-RPC error response 00:18:42.137 response: 00:18:42.137 { 00:18:42.137 "code": -19, 00:18:42.137 "message": "No such device" 00:18:42.137 } 00:18:42.395 13:48:44 -- common/autotest_common.sh@643 -- # es=1 00:18:42.395 13:48:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:42.395 13:48:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:42.395 13:48:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:42.395 13:48:44 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:42.395 aio_bdev 00:18:42.395 13:48:44 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b23211c9-fc75-4125-a2ff-13af1fd302cc 00:18:42.395 13:48:44 -- common/autotest_common.sh@887 -- # local bdev_name=b23211c9-fc75-4125-a2ff-13af1fd302cc 00:18:42.395 13:48:44 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.395 13:48:44 -- common/autotest_common.sh@889 -- # local i 00:18:42.395 13:48:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.395 13:48:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.395 13:48:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:42.654 13:48:44 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b23211c9-fc75-4125-a2ff-13af1fd302cc -t 2000 00:18:42.654 [ 00:18:42.654 { 00:18:42.654 "name": "b23211c9-fc75-4125-a2ff-13af1fd302cc", 00:18:42.654 "aliases": [ 00:18:42.654 "lvs/lvol" 00:18:42.654 ], 00:18:42.654 "product_name": "Logical Volume", 00:18:42.654 "block_size": 4096, 00:18:42.654 "num_blocks": 38912, 00:18:42.654 "uuid": "b23211c9-fc75-4125-a2ff-13af1fd302cc", 00:18:42.654 "assigned_rate_limits": { 00:18:42.654 "rw_ios_per_sec": 0, 00:18:42.654 "rw_mbytes_per_sec": 0, 00:18:42.654 "r_mbytes_per_sec": 0, 00:18:42.654 "w_mbytes_per_sec": 0 00:18:42.654 }, 00:18:42.654 "claimed": false, 00:18:42.654 "zoned": false, 00:18:42.654 "supported_io_types": { 00:18:42.654 "read": true, 00:18:42.654 "write": true, 00:18:42.654 "unmap": true, 00:18:42.654 "write_zeroes": true, 00:18:42.654 "flush": false, 00:18:42.654 "reset": true, 00:18:42.654 "compare": false, 00:18:42.654 "compare_and_write": false, 00:18:42.654 "abort": false, 00:18:42.654 "nvme_admin": false, 00:18:42.654 "nvme_io": false 00:18:42.654 }, 00:18:42.654 "driver_specific": { 00:18:42.654 "lvol": { 00:18:42.654 "lvol_store_uuid": "c7be92af-0ed4-497c-adb2-ba4cd6c9d91c", 00:18:42.654 "base_bdev": "aio_bdev", 00:18:42.654 "thin_provision": false, 00:18:42.654 "snapshot": false, 00:18:42.654 "clone": false, 00:18:42.654 "esnap_clone": false 00:18:42.654 } 00:18:42.654 } 00:18:42.654 } 00:18:42.654 ] 00:18:42.654 13:48:45 -- common/autotest_common.sh@895 -- # return 0 00:18:42.654 13:48:45 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:42.654 13:48:45 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:42.912 13:48:45 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:42.912 13:48:45 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:42.912 13:48:45 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:43.169 13:48:45 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:43.169 13:48:45 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b23211c9-fc75-4125-a2ff-13af1fd302cc 00:18:43.169 13:48:45 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7be92af-0ed4-497c-adb2-ba4cd6c9d91c 00:18:43.427 13:48:45 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.685 00:18:43.685 real 0m15.387s 00:18:43.685 user 0m15.064s 00:18:43.685 sys 0m1.371s 00:18:43.685 13:48:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 
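Two things are packed into the clean-variant epilogue above. First, the cluster arithmetic behind the assertions: the store was created with 4 MiB clusters on a 200 MiB file (49 data clusters after metadata) and grown to 400 MiB (99 data clusters), and the 150 MiB lvol rounds up to 38 clusters (the 38912 four-KiB blocks in the bdev dump), so free_clusters must come back as 99 - 38 = 61. Second, the persistence check: deleting the base AIO bdev hot-removes the lvstore, the next bdev_lvol_get_lvstores must therefore fail, and re-creating the AIO bdev must bring store and lvol back from on-disk metadata alone. The shape of that check, roughly, with rpc.py as traced above and $AIO_FILE, $LVS_UUID and $LVOL_UUID standing in for this run's concrete path and UUIDs:

  rpc.py bdev_aio_delete aio_bdev                  # vbdev_lvol hotremove closes lvstore 'lvs'
  rpc.py bdev_lvol_get_lvstores -u $LVS_UUID       # expected failure: JSON-RPC -19, No such device
  rpc.py bdev_aio_create $AIO_FILE aio_bdev 4096   # re-attach the same backing file
  rpc.py bdev_wait_for_examine                     # examine re-registers lvs and lvs/lvol
  rpc.py bdev_get_bdevs -b $LVOL_UUID -t 2000      # lvol is back: same UUID, same 38912 blocks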
00:18:43.685 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:18:43.685 ************************************ 00:18:43.685 END TEST lvs_grow_clean 00:18:43.685 ************************************ 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:43.685 13:48:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:43.685 13:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:43.685 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:18:43.685 ************************************ 00:18:43.685 START TEST lvs_grow_dirty 00:18:43.685 ************************************ 00:18:43.685 13:48:45 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.685 13:48:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:43.944 13:48:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:43.944 13:48:46 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:43.944 13:48:46 -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:43.944 13:48:46 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:43.944 13:48:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:44.203 13:48:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:44.203 13:48:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:44.203 13:48:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 lvol 150 00:18:44.461 13:48:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=40bd1031-4559-43b3-ae74-369220ffba82 00:18:44.461 13:48:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:44.461 13:48:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:44.461 [2024-07-11 13:48:46.846502] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:44.461 [2024-07-11 13:48:46.846554] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:44.461 
true 00:18:44.461 13:48:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:44.461 13:48:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:44.720 13:48:47 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:44.720 13:48:47 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:44.980 13:48:47 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40bd1031-4559-43b3-ae74-369220ffba82 00:18:44.980 13:48:47 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.257 13:48:47 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:45.257 13:48:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1587792 00:18:45.257 13:48:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.257 13:48:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:45.257 13:48:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1587792 /var/tmp/bdevperf.sock 00:18:45.257 13:48:47 -- common/autotest_common.sh@819 -- # '[' -z 1587792 ']' 00:18:45.257 13:48:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.257 13:48:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:45.257 13:48:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.257 13:48:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:45.257 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:18:45.257 [2024-07-11 13:48:47.693257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
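The dirty variant has just rebuilt the same fixture the clean variant used, this time on lvstore 6f3145c9-6796-4a28-abfc-684ff8603ad0. Both variants drive the grow sequence that gives the test its name; in outline, with paths and UUIDs abbreviated as in the sketch above (sizes as traced; 49 and 99 are what 4 MiB clusters yield on a 200 MiB and then a 400 MiB file after metadata):

  truncate -s 200M $AIO_FILE
  rpc.py bdev_aio_create $AIO_FILE aio_bdev 4096                # 51200 blocks of 4 KiB
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters'   # 49
  rpc.py bdev_lvol_create -u $LVS_UUID lvol 150                 # 150 MiB volume, exported over NVMe/TCP
  truncate -s 400M $AIO_FILE                                    # grow the backing file underneath
  rpc.py bdev_aio_rescan aio_bdev                               # bdev resizes: 51200 -> 102400 blocks
  rpc.py bdev_lvol_grow_lvstore -u $LVS_UUID                    # issued mid-run: total_data_clusters 49 -> 99

The grow is deliberately issued while bdevperf is writing to the exported lvol over TCP, so the assertion that total_data_clusters doubles is also an assertion that the resize is safe under live I/O.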
00:18:45.257 [2024-07-11 13:48:47.693307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587792 ] 00:18:45.560 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.560 [2024-07-11 13:48:47.746560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.560 [2024-07-11 13:48:47.785098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.127 13:48:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.127 13:48:48 -- common/autotest_common.sh@852 -- # return 0 00:18:46.127 13:48:48 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:46.694 Nvme0n1 00:18:46.694 13:48:48 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:46.695 [ 00:18:46.695 { 00:18:46.695 "name": "Nvme0n1", 00:18:46.695 "aliases": [ 00:18:46.695 "40bd1031-4559-43b3-ae74-369220ffba82" 00:18:46.695 ], 00:18:46.695 "product_name": "NVMe disk", 00:18:46.695 "block_size": 4096, 00:18:46.695 "num_blocks": 38912, 00:18:46.695 "uuid": "40bd1031-4559-43b3-ae74-369220ffba82", 00:18:46.695 "assigned_rate_limits": { 00:18:46.695 "rw_ios_per_sec": 0, 00:18:46.695 "rw_mbytes_per_sec": 0, 00:18:46.695 "r_mbytes_per_sec": 0, 00:18:46.695 "w_mbytes_per_sec": 0 00:18:46.695 }, 00:18:46.695 "claimed": false, 00:18:46.695 "zoned": false, 00:18:46.695 "supported_io_types": { 00:18:46.695 "read": true, 00:18:46.695 "write": true, 00:18:46.695 "unmap": true, 00:18:46.695 "write_zeroes": true, 00:18:46.695 "flush": true, 00:18:46.695 "reset": true, 00:18:46.695 "compare": true, 00:18:46.695 "compare_and_write": true, 00:18:46.695 "abort": true, 00:18:46.695 "nvme_admin": true, 00:18:46.695 "nvme_io": true 00:18:46.695 }, 00:18:46.695 "driver_specific": { 00:18:46.695 "nvme": [ 00:18:46.695 { 00:18:46.695 "trid": { 00:18:46.695 "trtype": "TCP", 00:18:46.695 "adrfam": "IPv4", 00:18:46.695 "traddr": "10.0.0.2", 00:18:46.695 "trsvcid": "4420", 00:18:46.695 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:46.695 }, 00:18:46.695 "ctrlr_data": { 00:18:46.695 "cntlid": 1, 00:18:46.695 "vendor_id": "0x8086", 00:18:46.695 "model_number": "SPDK bdev Controller", 00:18:46.695 "serial_number": "SPDK0", 00:18:46.695 "firmware_revision": "24.01.1", 00:18:46.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:46.695 "oacs": { 00:18:46.695 "security": 0, 00:18:46.695 "format": 0, 00:18:46.695 "firmware": 0, 00:18:46.695 "ns_manage": 0 00:18:46.695 }, 00:18:46.695 "multi_ctrlr": true, 00:18:46.695 "ana_reporting": false 00:18:46.695 }, 00:18:46.695 "vs": { 00:18:46.695 "nvme_version": "1.3" 00:18:46.695 }, 00:18:46.695 "ns_data": { 00:18:46.695 "id": 1, 00:18:46.695 "can_share": true 00:18:46.695 } 00:18:46.695 } 00:18:46.695 ], 00:18:46.695 "mp_policy": "active_passive" 00:18:46.695 } 00:18:46.695 } 00:18:46.695 ] 00:18:46.695 13:48:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1588040 00:18:46.695 13:48:49 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.695 13:48:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:46.695 Running I/O 
for 10 seconds... 00:18:48.072 Latency(us) 00:18:48.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.072 Nvme0n1 : 1.00 23493.00 91.77 0.00 0.00 0.00 0.00 0.00 00:18:48.072 =================================================================================================================== 00:18:48.072 Total : 23493.00 91.77 0.00 0.00 0.00 0.00 0.00 00:18:48.072 00:18:48.641 13:48:51 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:48.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.900 Nvme0n1 : 2.00 23650.50 92.38 0.00 0.00 0.00 0.00 0.00 00:18:48.900 =================================================================================================================== 00:18:48.900 Total : 23650.50 92.38 0.00 0.00 0.00 0.00 0.00 00:18:48.900 00:18:48.900 true 00:18:48.900 13:48:51 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:48.900 13:48:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:49.159 13:48:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:49.159 13:48:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:49.159 13:48:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 1588040 00:18:49.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.727 Nvme0n1 : 3.00 23721.00 92.66 0.00 0.00 0.00 0.00 0.00 00:18:49.727 =================================================================================================================== 00:18:49.727 Total : 23721.00 92.66 0.00 0.00 0.00 0.00 0.00 00:18:49.727 00:18:51.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.121 Nvme0n1 : 4.00 23806.50 92.99 0.00 0.00 0.00 0.00 0.00 00:18:51.121 =================================================================================================================== 00:18:51.121 Total : 23806.50 92.99 0.00 0.00 0.00 0.00 0.00 00:18:51.121 00:18:52.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.056 Nvme0n1 : 5.00 23861.40 93.21 0.00 0.00 0.00 0.00 0.00 00:18:52.056 =================================================================================================================== 00:18:52.056 Total : 23861.40 93.21 0.00 0.00 0.00 0.00 0.00 00:18:52.056 00:18:52.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.992 Nvme0n1 : 6.00 23913.67 93.41 0.00 0.00 0.00 0.00 0.00 00:18:52.992 =================================================================================================================== 00:18:52.992 Total : 23913.67 93.41 0.00 0.00 0.00 0.00 0.00 00:18:52.992 00:18:53.929 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.929 Nvme0n1 : 7.00 23946.57 93.54 0.00 0.00 0.00 0.00 0.00 00:18:53.929 =================================================================================================================== 00:18:53.929 Total : 23946.57 93.54 0.00 0.00 0.00 0.00 0.00 00:18:53.929 00:18:54.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.867 Nvme0n1 : 8.00 23975.12 93.65 0.00 0.00 0.00 0.00 0.00 00:18:54.867 
=================================================================================================================== 00:18:54.867 Total : 23975.12 93.65 0.00 0.00 0.00 0.00 0.00 00:18:54.867 00:18:55.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.804 Nvme0n1 : 9.00 23999.11 93.75 0.00 0.00 0.00 0.00 0.00 00:18:55.804 =================================================================================================================== 00:18:55.804 Total : 23999.11 93.75 0.00 0.00 0.00 0.00 0.00 00:18:55.804 00:18:56.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.737 Nvme0n1 : 10.00 24012.20 93.80 0.00 0.00 0.00 0.00 0.00 00:18:56.737 =================================================================================================================== 00:18:56.737 Total : 24012.20 93.80 0.00 0.00 0.00 0.00 0.00 00:18:56.737 00:18:56.737 00:18:56.737 Latency(us) 00:18:56.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.737 Nvme0n1 : 10.01 24012.51 93.80 0.00 0.00 5327.54 3219.81 14816.83 00:18:56.737 =================================================================================================================== 00:18:56.737 Total : 24012.51 93.80 0.00 0.00 5327.54 3219.81 14816.83 00:18:56.737 0 00:18:56.994 13:48:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1587792 00:18:56.994 13:48:59 -- common/autotest_common.sh@926 -- # '[' -z 1587792 ']' 00:18:56.994 13:48:59 -- common/autotest_common.sh@930 -- # kill -0 1587792 00:18:56.994 13:48:59 -- common/autotest_common.sh@931 -- # uname 00:18:56.994 13:48:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.994 13:48:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1587792 00:18:56.994 13:48:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:56.994 13:48:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:56.994 13:48:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1587792' 00:18:56.994 killing process with pid 1587792 00:18:56.994 13:48:59 -- common/autotest_common.sh@945 -- # kill 1587792 00:18:56.994 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.994 00:18:56.994 Latency(us) 00:18:56.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.994 =================================================================================================================== 00:18:56.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.994 13:48:59 -- common/autotest_common.sh@950 -- # wait 1587792 00:18:56.994 13:48:59 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:57.256 13:48:59 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:57.256 13:48:59 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1584727 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@74 -- # wait 1584727 00:18:57.514 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1584727 Killed "${NVMF_APP[@]}" "$@" 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:57.514 13:48:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:57.514 13:48:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:57.514 13:48:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:57.514 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:57.514 13:48:59 -- nvmf/common.sh@469 -- # nvmfpid=1589789 00:18:57.514 13:48:59 -- nvmf/common.sh@470 -- # waitforlisten 1589789 00:18:57.514 13:48:59 -- common/autotest_common.sh@819 -- # '[' -z 1589789 ']' 00:18:57.514 13:48:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.514 13:48:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:57.514 13:48:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.514 13:48:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:57.514 13:48:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:57.514 13:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:57.514 [2024-07-11 13:48:59.863341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:57.515 [2024-07-11 13:48:59.863385] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.515 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.515 [2024-07-11 13:48:59.920716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.515 [2024-07-11 13:48:59.958869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:57.515 [2024-07-11 13:48:59.958989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.515 [2024-07-11 13:48:59.958996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.515 [2024-07-11 13:48:59.959003] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
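This restart is the heart of lvs_grow_dirty. The previous target (pid 1584727) was taken down with kill -9 while the lvstore was open, so the store was never cleanly unloaded; the fresh nvmf_tgt started above must now reload it from a dirty state. When the AIO bdev is re-created, the blobstore load path detects the unclean shutdown and runs recovery, which is what the 'Performing recovery on blobstore' and 'Recover: blob 0x0/0x1' notices that follow report. The shape of the check, in the same abbreviated form as the earlier sketches:

  kill -9 $nvmfpid                                              # no clean lvstore unload
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1  # fresh target process
  rpc.py bdev_aio_create $AIO_FILE aio_bdev 4096                # load triggers blobstore recovery
  rpc.py bdev_lvol_get_lvstores -u $LVS_UUID                    # free=61 / total=99 must survive

If the recovered free and total cluster counts match the pre-kill values, the grow performed under I/O was durably recorded despite the crash.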
00:18:57.515 [2024-07-11 13:48:59.959024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.452 13:49:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:58.452 13:49:00 -- common/autotest_common.sh@852 -- # return 0 00:18:58.452 13:49:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:58.452 13:49:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:58.452 13:49:00 -- common/autotest_common.sh@10 -- # set +x 00:18:58.452 13:49:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.452 13:49:00 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:58.452 [2024-07-11 13:49:00.834945] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:58.452 [2024-07-11 13:49:00.835046] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:58.452 [2024-07-11 13:49:00.835072] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:58.452 13:49:00 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:58.452 13:49:00 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 40bd1031-4559-43b3-ae74-369220ffba82 00:18:58.452 13:49:00 -- common/autotest_common.sh@887 -- # local bdev_name=40bd1031-4559-43b3-ae74-369220ffba82 00:18:58.452 13:49:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:58.452 13:49:00 -- common/autotest_common.sh@889 -- # local i 00:18:58.452 13:49:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:58.452 13:49:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:58.452 13:49:00 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:58.710 13:49:01 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 40bd1031-4559-43b3-ae74-369220ffba82 -t 2000 00:18:58.969 [ 00:18:58.969 { 00:18:58.969 "name": "40bd1031-4559-43b3-ae74-369220ffba82", 00:18:58.969 "aliases": [ 00:18:58.969 "lvs/lvol" 00:18:58.969 ], 00:18:58.969 "product_name": "Logical Volume", 00:18:58.969 "block_size": 4096, 00:18:58.969 "num_blocks": 38912, 00:18:58.969 "uuid": "40bd1031-4559-43b3-ae74-369220ffba82", 00:18:58.969 "assigned_rate_limits": { 00:18:58.969 "rw_ios_per_sec": 0, 00:18:58.969 "rw_mbytes_per_sec": 0, 00:18:58.969 "r_mbytes_per_sec": 0, 00:18:58.969 "w_mbytes_per_sec": 0 00:18:58.969 }, 00:18:58.969 "claimed": false, 00:18:58.969 "zoned": false, 00:18:58.969 "supported_io_types": { 00:18:58.969 "read": true, 00:18:58.969 "write": true, 00:18:58.969 "unmap": true, 00:18:58.969 "write_zeroes": true, 00:18:58.969 "flush": false, 00:18:58.969 "reset": true, 00:18:58.969 "compare": false, 00:18:58.969 "compare_and_write": false, 00:18:58.969 "abort": false, 00:18:58.969 "nvme_admin": false, 00:18:58.969 "nvme_io": false 00:18:58.969 }, 00:18:58.969 "driver_specific": { 00:18:58.969 "lvol": { 00:18:58.969 "lvol_store_uuid": "6f3145c9-6796-4a28-abfc-684ff8603ad0", 00:18:58.969 "base_bdev": "aio_bdev", 00:18:58.969 "thin_provision": false, 00:18:58.969 "snapshot": false, 00:18:58.969 "clone": false, 00:18:58.969 "esnap_clone": false 00:18:58.969 } 00:18:58.969 } 00:18:58.969 } 00:18:58.969 ] 00:18:58.969 13:49:01 -- common/autotest_common.sh@895 -- # return 0 00:18:58.969 13:49:01 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:58.969 13:49:01 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:58.969 13:49:01 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:58.969 13:49:01 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:58.969 13:49:01 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:59.227 13:49:01 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:59.227 13:49:01 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:59.227 [2024-07-11 13:49:01.679484] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:59.486 13:49:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:59.486 13:49:01 -- common/autotest_common.sh@640 -- # local es=0 00:18:59.486 13:49:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:59.486 13:49:01 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.486 13:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:59.486 13:49:01 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.486 13:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:59.486 13:49:01 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.486 13:49:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:59.486 13:49:01 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.486 13:49:01 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:59.486 13:49:01 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:18:59.486 request: 00:18:59.486 { 00:18:59.486 "uuid": "6f3145c9-6796-4a28-abfc-684ff8603ad0", 00:18:59.486 "method": "bdev_lvol_get_lvstores", 00:18:59.486 "req_id": 1 00:18:59.486 } 00:18:59.486 Got JSON-RPC error response 00:18:59.486 response: 00:18:59.486 { 00:18:59.486 "code": -19, 00:18:59.486 "message": "No such device" 00:18:59.486 } 00:18:59.486 13:49:01 -- common/autotest_common.sh@643 -- # es=1 00:18:59.486 13:49:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:59.486 13:49:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:59.486 13:49:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:59.486 13:49:01 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:59.745 aio_bdev 00:18:59.745 13:49:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 40bd1031-4559-43b3-ae74-369220ffba82 00:18:59.745 13:49:02 -- 
common/autotest_common.sh@887 -- # local bdev_name=40bd1031-4559-43b3-ae74-369220ffba82 00:18:59.745 13:49:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:59.745 13:49:02 -- common/autotest_common.sh@889 -- # local i 00:18:59.745 13:49:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:59.745 13:49:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:59.745 13:49:02 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:00.004 13:49:02 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 40bd1031-4559-43b3-ae74-369220ffba82 -t 2000 00:19:00.004 [ 00:19:00.004 { 00:19:00.004 "name": "40bd1031-4559-43b3-ae74-369220ffba82", 00:19:00.004 "aliases": [ 00:19:00.004 "lvs/lvol" 00:19:00.004 ], 00:19:00.004 "product_name": "Logical Volume", 00:19:00.004 "block_size": 4096, 00:19:00.004 "num_blocks": 38912, 00:19:00.004 "uuid": "40bd1031-4559-43b3-ae74-369220ffba82", 00:19:00.004 "assigned_rate_limits": { 00:19:00.004 "rw_ios_per_sec": 0, 00:19:00.004 "rw_mbytes_per_sec": 0, 00:19:00.004 "r_mbytes_per_sec": 0, 00:19:00.004 "w_mbytes_per_sec": 0 00:19:00.004 }, 00:19:00.004 "claimed": false, 00:19:00.004 "zoned": false, 00:19:00.004 "supported_io_types": { 00:19:00.004 "read": true, 00:19:00.004 "write": true, 00:19:00.004 "unmap": true, 00:19:00.004 "write_zeroes": true, 00:19:00.004 "flush": false, 00:19:00.004 "reset": true, 00:19:00.004 "compare": false, 00:19:00.004 "compare_and_write": false, 00:19:00.004 "abort": false, 00:19:00.004 "nvme_admin": false, 00:19:00.004 "nvme_io": false 00:19:00.004 }, 00:19:00.004 "driver_specific": { 00:19:00.004 "lvol": { 00:19:00.004 "lvol_store_uuid": "6f3145c9-6796-4a28-abfc-684ff8603ad0", 00:19:00.004 "base_bdev": "aio_bdev", 00:19:00.004 "thin_provision": false, 00:19:00.004 "snapshot": false, 00:19:00.004 "clone": false, 00:19:00.004 "esnap_clone": false 00:19:00.004 } 00:19:00.004 } 00:19:00.004 } 00:19:00.004 ] 00:19:00.004 13:49:02 -- common/autotest_common.sh@895 -- # return 0 00:19:00.004 13:49:02 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:19:00.004 13:49:02 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:00.263 13:49:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:00.263 13:49:02 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:19:00.263 13:49:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:00.521 13:49:02 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:00.521 13:49:02 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40bd1031-4559-43b3-ae74-369220ffba82 00:19:00.521 13:49:02 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f3145c9-6796-4a28-abfc-684ff8603ad0 00:19:00.780 13:49:03 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:01.038 13:49:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:01.038 00:19:01.038 real 0m17.327s 00:19:01.038 user 
0m44.164s 00:19:01.038 sys 0m3.676s 00:19:01.038 13:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.038 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:19:01.038 ************************************ 00:19:01.039 END TEST lvs_grow_dirty 00:19:01.039 ************************************ 00:19:01.039 13:49:03 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:01.039 13:49:03 -- common/autotest_common.sh@796 -- # type=--id 00:19:01.039 13:49:03 -- common/autotest_common.sh@797 -- # id=0 00:19:01.039 13:49:03 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:01.039 13:49:03 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:01.039 13:49:03 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:01.039 13:49:03 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:01.039 13:49:03 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:01.039 13:49:03 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:01.039 nvmf_trace.0 00:19:01.039 13:49:03 -- common/autotest_common.sh@811 -- # return 0 00:19:01.039 13:49:03 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:01.039 13:49:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:01.039 13:49:03 -- nvmf/common.sh@116 -- # sync 00:19:01.039 13:49:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:01.039 13:49:03 -- nvmf/common.sh@119 -- # set +e 00:19:01.039 13:49:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:01.039 13:49:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:01.039 rmmod nvme_tcp 00:19:01.039 rmmod nvme_fabrics 00:19:01.039 rmmod nvme_keyring 00:19:01.039 13:49:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:01.039 13:49:03 -- nvmf/common.sh@123 -- # set -e 00:19:01.039 13:49:03 -- nvmf/common.sh@124 -- # return 0 00:19:01.039 13:49:03 -- nvmf/common.sh@477 -- # '[' -n 1589789 ']' 00:19:01.039 13:49:03 -- nvmf/common.sh@478 -- # killprocess 1589789 00:19:01.039 13:49:03 -- common/autotest_common.sh@926 -- # '[' -z 1589789 ']' 00:19:01.039 13:49:03 -- common/autotest_common.sh@930 -- # kill -0 1589789 00:19:01.039 13:49:03 -- common/autotest_common.sh@931 -- # uname 00:19:01.039 13:49:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:01.039 13:49:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1589789 00:19:01.362 13:49:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:01.362 13:49:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:01.362 13:49:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1589789' 00:19:01.362 killing process with pid 1589789 00:19:01.362 13:49:03 -- common/autotest_common.sh@945 -- # kill 1589789 00:19:01.362 13:49:03 -- common/autotest_common.sh@950 -- # wait 1589789 00:19:01.362 13:49:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:01.362 13:49:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:01.362 13:49:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:01.362 13:49:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.362 13:49:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:01.362 13:49:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.362 13:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.362 13:49:03 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:03.299 13:49:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:03.299 00:19:03.299 real 0m41.686s 00:19:03.299 user 1m4.766s 00:19:03.299 sys 0m9.476s 00:19:03.299 13:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.299 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:19:03.299 ************************************ 00:19:03.299 END TEST nvmf_lvs_grow 00:19:03.299 ************************************ 00:19:03.558 13:49:05 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.558 13:49:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:03.558 13:49:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:03.558 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:19:03.558 ************************************ 00:19:03.558 START TEST nvmf_bdev_io_wait 00:19:03.558 ************************************ 00:19:03.558 13:49:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.558 * Looking for test storage... 00:19:03.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.558 13:49:05 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.558 13:49:05 -- nvmf/common.sh@7 -- # uname -s 00:19:03.558 13:49:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.558 13:49:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.558 13:49:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.558 13:49:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.558 13:49:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.558 13:49:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.558 13:49:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.558 13:49:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.558 13:49:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.558 13:49:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.558 13:49:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.558 13:49:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.558 13:49:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.558 13:49:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.558 13:49:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.558 13:49:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.558 13:49:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.558 13:49:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.558 13:49:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.558 13:49:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.558 13:49:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.558 13:49:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.558 13:49:05 -- paths/export.sh@5 -- # export PATH 00:19:03.558 13:49:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.558 13:49:05 -- nvmf/common.sh@46 -- # : 0 00:19:03.558 13:49:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:03.558 13:49:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:03.558 13:49:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:03.558 13:49:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.558 13:49:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.558 13:49:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:03.558 13:49:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:03.558 13:49:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:03.558 13:49:05 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.558 13:49:05 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.558 13:49:05 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:03.558 13:49:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:03.558 13:49:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.558 13:49:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:03.558 13:49:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:03.558 13:49:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:03.558 13:49:05 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.558 13:49:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.558 13:49:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.558 13:49:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:03.558 13:49:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:03.558 13:49:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:03.558 13:49:05 -- common/autotest_common.sh@10 -- # set +x 00:19:08.830 13:49:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:08.830 13:49:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:08.830 13:49:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:08.830 13:49:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:08.830 13:49:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:08.830 13:49:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:08.830 13:49:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:08.830 13:49:10 -- nvmf/common.sh@294 -- # net_devs=() 00:19:08.830 13:49:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:08.830 13:49:10 -- nvmf/common.sh@295 -- # e810=() 00:19:08.830 13:49:10 -- nvmf/common.sh@295 -- # local -ga e810 00:19:08.830 13:49:10 -- nvmf/common.sh@296 -- # x722=() 00:19:08.830 13:49:10 -- nvmf/common.sh@296 -- # local -ga x722 00:19:08.830 13:49:10 -- nvmf/common.sh@297 -- # mlx=() 00:19:08.830 13:49:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:08.830 13:49:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.830 13:49:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.831 13:49:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.831 13:49:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.831 13:49:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:08.831 13:49:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:08.831 13:49:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:08.831 13:49:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:08.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:08.831 13:49:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:19:08.831 13:49:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:08.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:08.831 13:49:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:08.831 13:49:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.831 13:49:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.831 13:49:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:08.831 Found net devices under 0000:86:00.0: cvl_0_0 00:19:08.831 13:49:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.831 13:49:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:08.831 13:49:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.831 13:49:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.831 13:49:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:08.831 Found net devices under 0000:86:00.1: cvl_0_1 00:19:08.831 13:49:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.831 13:49:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:08.831 13:49:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:08.831 13:49:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:08.831 13:49:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.831 13:49:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.831 13:49:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.831 13:49:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:08.831 13:49:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.831 13:49:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.831 13:49:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:08.831 13:49:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.831 13:49:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.831 13:49:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:08.831 13:49:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:08.831 13:49:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.831 13:49:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.831 13:49:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.831 13:49:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.831 13:49:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:08.831 13:49:10 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.831 13:49:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.831 13:49:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.831 13:49:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:08.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:19:08.831 00:19:08.831 --- 10.0.0.2 ping statistics --- 00:19:08.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.831 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:19:08.831 13:49:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:19:08.831 00:19:08.831 --- 10.0.0.1 ping statistics --- 00:19:08.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.831 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:08.831 13:49:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.831 13:49:11 -- nvmf/common.sh@410 -- # return 0 00:19:08.831 13:49:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:08.831 13:49:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.831 13:49:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:08.831 13:49:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:08.831 13:49:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.831 13:49:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:08.831 13:49:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:08.831 13:49:11 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:08.831 13:49:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:08.831 13:49:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:08.831 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:08.831 13:49:11 -- nvmf/common.sh@469 -- # nvmfpid=1593854 00:19:08.831 13:49:11 -- nvmf/common.sh@470 -- # waitforlisten 1593854 00:19:08.831 13:49:11 -- common/autotest_common.sh@819 -- # '[' -z 1593854 ']' 00:19:08.831 13:49:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.831 13:49:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:08.831 13:49:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.831 13:49:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:08.831 13:49:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:08.831 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:08.831 [2024-07-11 13:49:11.128743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:08.831 [2024-07-11 13:49:11.128785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.831 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.831 [2024-07-11 13:49:11.185372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.831 [2024-07-11 13:49:11.226535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:08.831 [2024-07-11 13:49:11.226643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.831 [2024-07-11 13:49:11.226651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.831 [2024-07-11 13:49:11.226656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.831 [2024-07-11 13:49:11.226694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.831 [2024-07-11 13:49:11.226805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.831 [2024-07-11 13:49:11.226890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.831 [2024-07-11 13:49:11.226891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.831 13:49:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:08.831 13:49:11 -- common/autotest_common.sh@852 -- # return 0 00:19:08.831 13:49:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:08.831 13:49:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:08.831 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 13:49:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 [2024-07-11 13:49:11.355562] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.089 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 Malloc0 00:19:09.089 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.089 13:49:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.089 13:49:11 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.089 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.089 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.090 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.090 13:49:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.090 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:19:09.090 [2024-07-11 13:49:11.420087] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.090 13:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1593972 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@30 -- # READ_PID=1593975 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1593976 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # config=() 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1593979 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@35 -- # sync 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:09.090 13:49:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.090 { 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme$subsystem", 00:19:09.090 "trtype": "$TEST_TRANSPORT", 00:19:09.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "$NVMF_PORT", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.090 "hdgst": ${hdgst:-false}, 00:19:09.090 "ddgst": ${ddgst:-false} 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 } 00:19:09.090 EOF 00:19:09.090 )") 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # config=() 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:09.090 13:49:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # config=() 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # config=() 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.090 { 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme$subsystem", 00:19:09.090 "trtype": "$TEST_TRANSPORT", 00:19:09.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.090 
"adrfam": "ipv4", 00:19:09.090 "trsvcid": "$NVMF_PORT", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.090 "hdgst": ${hdgst:-false}, 00:19:09.090 "ddgst": ${ddgst:-false} 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 } 00:19:09.090 EOF 00:19:09.090 )") 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.090 13:49:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.090 13:49:11 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.090 { 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme$subsystem", 00:19:09.090 "trtype": "$TEST_TRANSPORT", 00:19:09.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "$NVMF_PORT", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.090 "hdgst": ${hdgst:-false}, 00:19:09.090 "ddgst": ${ddgst:-false} 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 } 00:19:09.090 EOF 00:19:09.090 )") 00:19:09.090 13:49:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.090 { 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme$subsystem", 00:19:09.090 "trtype": "$TEST_TRANSPORT", 00:19:09.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "$NVMF_PORT", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.090 "hdgst": ${hdgst:-false}, 00:19:09.090 "ddgst": ${ddgst:-false} 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 } 00:19:09.090 EOF 00:19:09.090 )") 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # cat 00:19:09.090 13:49:11 -- target/bdev_io_wait.sh@37 -- # wait 1593972 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # cat 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # cat 00:19:09.090 13:49:11 -- nvmf/common.sh@542 -- # cat 00:19:09.090 13:49:11 -- nvmf/common.sh@544 -- # jq . 00:19:09.090 13:49:11 -- nvmf/common.sh@544 -- # jq . 00:19:09.090 13:49:11 -- nvmf/common.sh@544 -- # jq . 00:19:09.090 13:49:11 -- nvmf/common.sh@544 -- # jq . 
00:19:09.090 13:49:11 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.090 13:49:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme1", 00:19:09.090 "trtype": "tcp", 00:19:09.090 "traddr": "10.0.0.2", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "4420", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.090 "hdgst": false, 00:19:09.090 "ddgst": false 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 }' 00:19:09.090 13:49:11 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.090 13:49:11 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.090 13:49:11 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.090 13:49:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme1", 00:19:09.090 "trtype": "tcp", 00:19:09.090 "traddr": "10.0.0.2", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "4420", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.090 "hdgst": false, 00:19:09.090 "ddgst": false 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 }' 00:19:09.090 13:49:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme1", 00:19:09.090 "trtype": "tcp", 00:19:09.090 "traddr": "10.0.0.2", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "4420", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.090 "hdgst": false, 00:19:09.090 "ddgst": false 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 }' 00:19:09.090 13:49:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.090 "params": { 00:19:09.090 "name": "Nvme1", 00:19:09.090 "trtype": "tcp", 00:19:09.090 "traddr": "10.0.0.2", 00:19:09.090 "adrfam": "ipv4", 00:19:09.090 "trsvcid": "4420", 00:19:09.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.090 "hdgst": false, 00:19:09.090 "ddgst": false 00:19:09.090 }, 00:19:09.090 "method": "bdev_nvme_attach_controller" 00:19:09.090 }' 00:19:09.090 [2024-07-11 13:49:11.465289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:09.090 [2024-07-11 13:49:11.465343] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:09.090 [2024-07-11 13:49:11.469719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:09.090 [2024-07-11 13:49:11.469768] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:09.090 [2024-07-11 13:49:11.471084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:09.090 [2024-07-11 13:49:11.471091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:09.090 [2024-07-11 13:49:11.471124] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:09.090 [2024-07-11 13:49:11.471129] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:09.090 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.348 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.348 [2024-07-11 13:49:11.640012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.348 [2024-07-11 13:49:11.665350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.348 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.348 [2024-07-11 13:49:11.738657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.348 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.348 [2024-07-11 13:49:11.770902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:09.348 [2024-07-11 13:49:11.791914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.607 [2024-07-11 13:49:11.814973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:09.607 [2024-07-11 13:49:11.884426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.607 [2024-07-11 13:49:11.913876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:09.607 Running I/O for 1 seconds... 00:19:09.607 Running I/O for 1 seconds... 00:19:09.865 Running I/O for 1 seconds... 00:19:09.865 Running I/O for 1 seconds... 
00:19:10.800 00:19:10.800 Latency(us) 00:19:10.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.800 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:10.800 Nvme1n1 : 1.01 8242.65 32.20 0.00 0.00 15425.67 6154.69 26556.33 00:19:10.800 =================================================================================================================== 00:19:10.800 Total : 8242.65 32.20 0.00 0.00 15425.67 6154.69 26556.33 00:19:10.800 00:19:10.800 Latency(us) 00:19:10.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.800 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:10.800 Nvme1n1 : 1.01 12121.86 47.35 0.00 0.00 10525.40 5869.75 22453.20 00:19:10.800 =================================================================================================================== 00:19:10.800 Total : 12121.86 47.35 0.00 0.00 10525.40 5869.75 22453.20 00:19:10.800 00:19:10.800 Latency(us) 00:19:10.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.800 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:10.800 Nvme1n1 : 1.00 8702.05 33.99 0.00 0.00 14678.23 3647.22 39663.53 00:19:10.800 =================================================================================================================== 00:19:10.800 Total : 8702.05 33.99 0.00 0.00 14678.23 3647.22 39663.53 00:19:10.800 00:19:10.800 Latency(us) 00:19:10.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.800 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:10.800 Nvme1n1 : 1.00 250925.50 980.18 0.00 0.00 508.14 209.25 648.24 00:19:10.800 =================================================================================================================== 00:19:10.800 Total : 250925.50 980.18 0.00 0.00 508.14 209.25 648.24 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@38 -- # wait 1593975 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@39 -- # wait 1593976 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@40 -- # wait 1593979 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.060 13:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:11.060 13:49:13 -- common/autotest_common.sh@10 -- # set +x 00:19:11.060 13:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:11.060 13:49:13 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:11.060 13:49:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:11.060 13:49:13 -- nvmf/common.sh@116 -- # sync 00:19:11.060 13:49:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:11.060 13:49:13 -- nvmf/common.sh@119 -- # set +e 00:19:11.060 13:49:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:11.060 13:49:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:11.060 rmmod nvme_tcp 00:19:11.060 rmmod nvme_fabrics 00:19:11.060 rmmod nvme_keyring 00:19:11.060 13:49:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:11.060 13:49:13 -- nvmf/common.sh@123 -- # set -e 00:19:11.060 13:49:13 -- nvmf/common.sh@124 -- # return 0 00:19:11.060 13:49:13 -- nvmf/common.sh@477 -- # '[' -n 1593854 ']' 00:19:11.060 13:49:13 -- nvmf/common.sh@478 -- # killprocess 1593854 00:19:11.060 13:49:13 -- common/autotest_common.sh@926 -- # '[' -z 1593854 ']' 00:19:11.060 13:49:13 -- 
common/autotest_common.sh@930 -- # kill -0 1593854 00:19:11.060 13:49:13 -- common/autotest_common.sh@931 -- # uname 00:19:11.060 13:49:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:11.060 13:49:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1593854 00:19:11.322 13:49:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:11.322 13:49:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:11.322 13:49:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1593854' 00:19:11.322 killing process with pid 1593854 00:19:11.322 13:49:13 -- common/autotest_common.sh@945 -- # kill 1593854 00:19:11.322 13:49:13 -- common/autotest_common.sh@950 -- # wait 1593854 00:19:11.322 13:49:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:11.322 13:49:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:11.322 13:49:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:11.322 13:49:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.322 13:49:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:11.322 13:49:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.322 13:49:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.322 13:49:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.854 13:49:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:13.854 00:19:13.854 real 0m10.001s 00:19:13.854 user 0m16.567s 00:19:13.854 sys 0m5.464s 00:19:13.854 13:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.854 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.854 ************************************ 00:19:13.854 END TEST nvmf_bdev_io_wait 00:19:13.854 ************************************ 00:19:13.854 13:49:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:13.854 13:49:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:13.854 13:49:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:13.854 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:13.854 ************************************ 00:19:13.854 START TEST nvmf_queue_depth 00:19:13.854 ************************************ 00:19:13.854 13:49:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:13.854 * Looking for test storage... 
00:19:13.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.854 13:49:15 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.854 13:49:15 -- nvmf/common.sh@7 -- # uname -s 00:19:13.854 13:49:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.854 13:49:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.854 13:49:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.854 13:49:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.854 13:49:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.854 13:49:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.854 13:49:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.854 13:49:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.854 13:49:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.854 13:49:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.854 13:49:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.854 13:49:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.854 13:49:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.854 13:49:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.854 13:49:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.854 13:49:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.854 13:49:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.854 13:49:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.854 13:49:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.854 13:49:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.854 13:49:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.854 13:49:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.854 13:49:15 -- paths/export.sh@5 -- # export PATH 00:19:13.854 13:49:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.854 13:49:15 -- nvmf/common.sh@46 -- # : 0 00:19:13.854 13:49:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:13.854 13:49:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:13.854 13:49:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:13.854 13:49:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.854 13:49:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.854 13:49:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:13.854 13:49:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:13.854 13:49:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:13.854 13:49:15 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:13.854 13:49:15 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:13.854 13:49:15 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.854 13:49:15 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:13.854 13:49:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:13.854 13:49:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.854 13:49:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:13.854 13:49:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:13.854 13:49:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:13.854 13:49:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.854 13:49:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:13.854 13:49:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.854 13:49:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:13.854 13:49:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:13.854 13:49:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:13.854 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:19:19.128 13:49:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:19.128 13:49:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:19.128 13:49:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:19.128 13:49:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:19.128 13:49:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:19.128 13:49:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:19.128 13:49:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:19.128 13:49:20 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:19.128 13:49:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:19.128 13:49:20 -- nvmf/common.sh@295 -- # e810=() 00:19:19.128 13:49:20 -- nvmf/common.sh@295 -- # local -ga e810 00:19:19.128 13:49:20 -- nvmf/common.sh@296 -- # x722=() 00:19:19.128 13:49:20 -- nvmf/common.sh@296 -- # local -ga x722 00:19:19.128 13:49:20 -- nvmf/common.sh@297 -- # mlx=() 00:19:19.128 13:49:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:19.128 13:49:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.128 13:49:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.128 13:49:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.128 13:49:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.128 13:49:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.128 13:49:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.129 13:49:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:19.129 13:49:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:19.129 13:49:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.129 13:49:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:19.129 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:19.129 13:49:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:19.129 13:49:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:19.129 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:19.129 13:49:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.129 13:49:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.129 13:49:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:19.129 13:49:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:19.129 Found net devices under 0000:86:00.0: cvl_0_0 00:19:19.129 13:49:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.129 13:49:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:19.129 13:49:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.129 13:49:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.129 13:49:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:19.129 Found net devices under 0000:86:00.1: cvl_0_1 00:19:19.129 13:49:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.129 13:49:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:19.129 13:49:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:19.129 13:49:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:19.129 13:49:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.129 13:49:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.129 13:49:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.129 13:49:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:19.129 13:49:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.129 13:49:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.129 13:49:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:19.129 13:49:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.129 13:49:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.129 13:49:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:19.129 13:49:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:19.129 13:49:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.129 13:49:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.129 13:49:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.129 13:49:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.129 13:49:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:19.129 13:49:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.129 13:49:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.129 13:49:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.129 13:49:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:19.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:19:19.129 00:19:19.129 --- 10.0.0.2 ping statistics --- 00:19:19.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.129 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:19.129 13:49:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:19:19.129 00:19:19.129 --- 10.0.0.1 ping statistics --- 00:19:19.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.129 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:19.129 13:49:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.129 13:49:21 -- nvmf/common.sh@410 -- # return 0 00:19:19.129 13:49:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:19.129 13:49:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.129 13:49:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:19.129 13:49:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:19.129 13:49:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.129 13:49:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:19.129 13:49:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:19.129 13:49:21 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:19.129 13:49:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:19.129 13:49:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:19.129 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:19:19.129 13:49:21 -- nvmf/common.sh@469 -- # nvmfpid=1597753 00:19:19.129 13:49:21 -- nvmf/common.sh@470 -- # waitforlisten 1597753 00:19:19.129 13:49:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.129 13:49:21 -- common/autotest_common.sh@819 -- # '[' -z 1597753 ']' 00:19:19.129 13:49:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.129 13:49:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:19.129 13:49:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.129 13:49:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:19.129 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:19:19.129 [2024-07-11 13:49:21.207569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:19.129 [2024-07-11 13:49:21.207612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.129 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.129 [2024-07-11 13:49:21.266423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.129 [2024-07-11 13:49:21.304677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:19.129 [2024-07-11 13:49:21.304788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.129 [2024-07-11 13:49:21.304797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.129 [2024-07-11 13:49:21.304803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
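[annotation] Before the target app start traced above, nvmf_tcp_init built the two-port test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is ping-verified in both directions. Collected from the trace into one sequence:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.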
00:19:19.129 [2024-07-11 13:49:21.304821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.697 13:49:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:19.697 13:49:21 -- common/autotest_common.sh@852 -- # return 0 00:19:19.697 13:49:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:19.698 13:49:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:19.698 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 13:49:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.698 13:49:22 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:19.698 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 [2024-07-11 13:49:22.038251] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.698 13:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.698 13:49:22 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:19.698 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 Malloc0 00:19:19.698 13:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.698 13:49:22 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.698 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 13:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.698 13:49:22 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.698 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 13:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.698 13:49:22 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.698 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 [2024-07-11 13:49:22.103475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.698 13:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.698 13:49:22 -- target/queue_depth.sh@30 -- # bdevperf_pid=1597925 00:19:19.698 13:49:22 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:19.698 13:49:22 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.698 13:49:22 -- target/queue_depth.sh@33 -- # waitforlisten 1597925 /var/tmp/bdevperf.sock 00:19:19.698 13:49:22 -- common/autotest_common.sh@819 -- # '[' -z 1597925 ']' 00:19:19.698 13:49:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.698 13:49:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:19.698 13:49:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:19.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.698 13:49:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:19.698 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:19.698 [2024-07-11 13:49:22.151464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:19.698 [2024-07-11 13:49:22.151508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597925 ] 00:19:19.956 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.956 [2024-07-11 13:49:22.202976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.956 [2024-07-11 13:49:22.240370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.524 13:49:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:20.524 13:49:22 -- common/autotest_common.sh@852 -- # return 0 00:19:20.524 13:49:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:20.524 13:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:20.524 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 NVMe0n1 00:19:20.783 13:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:20.783 13:49:23 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.783 Running I/O for 10 seconds... 00:19:30.792 00:19:30.793 Latency(us) 00:19:30.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.793 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:30.793 Verification LBA range: start 0x0 length 0x4000 00:19:30.793 NVMe0n1 : 10.05 18396.32 71.86 0.00 0.00 55499.45 11112.63 40803.28 00:19:30.793 =================================================================================================================== 00:19:30.793 Total : 18396.32 71.86 0.00 0.00 55499.45 11112.63 40803.28 00:19:30.793 0 00:19:30.793 13:49:33 -- target/queue_depth.sh@39 -- # killprocess 1597925 00:19:30.793 13:49:33 -- common/autotest_common.sh@926 -- # '[' -z 1597925 ']' 00:19:30.793 13:49:33 -- common/autotest_common.sh@930 -- # kill -0 1597925 00:19:30.793 13:49:33 -- common/autotest_common.sh@931 -- # uname 00:19:30.793 13:49:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:30.793 13:49:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1597925 00:19:31.051 13:49:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.051 13:49:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.051 13:49:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1597925' 00:19:31.051 killing process with pid 1597925 00:19:31.051 13:49:33 -- common/autotest_common.sh@945 -- # kill 1597925 00:19:31.051 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.051 00:19:31.051 Latency(us) 00:19:31.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.051 =================================================================================================================== 00:19:31.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.051 13:49:33 -- 
common/autotest_common.sh@950 -- # wait 1597925 00:19:31.051 13:49:33 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:31.051 13:49:33 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:31.051 13:49:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.051 13:49:33 -- nvmf/common.sh@116 -- # sync 00:19:31.051 13:49:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:31.051 13:49:33 -- nvmf/common.sh@119 -- # set +e 00:19:31.051 13:49:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.051 13:49:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:31.051 rmmod nvme_tcp 00:19:31.051 rmmod nvme_fabrics 00:19:31.051 rmmod nvme_keyring 00:19:31.051 13:49:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.051 13:49:33 -- nvmf/common.sh@123 -- # set -e 00:19:31.051 13:49:33 -- nvmf/common.sh@124 -- # return 0 00:19:31.051 13:49:33 -- nvmf/common.sh@477 -- # '[' -n 1597753 ']' 00:19:31.051 13:49:33 -- nvmf/common.sh@478 -- # killprocess 1597753 00:19:31.051 13:49:33 -- common/autotest_common.sh@926 -- # '[' -z 1597753 ']' 00:19:31.051 13:49:33 -- common/autotest_common.sh@930 -- # kill -0 1597753 00:19:31.051 13:49:33 -- common/autotest_common.sh@931 -- # uname 00:19:31.051 13:49:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:31.051 13:49:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1597753 00:19:31.310 13:49:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:31.310 13:49:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:31.310 13:49:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1597753' 00:19:31.310 killing process with pid 1597753 00:19:31.310 13:49:33 -- common/autotest_common.sh@945 -- # kill 1597753 00:19:31.310 13:49:33 -- common/autotest_common.sh@950 -- # wait 1597753 00:19:31.310 13:49:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.310 13:49:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.310 13:49:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.310 13:49:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.310 13:49:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.310 13:49:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.310 13:49:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.310 13:49:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.845 13:49:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:33.845 00:19:33.845 real 0m19.989s 00:19:33.845 user 0m24.445s 00:19:33.845 sys 0m5.649s 00:19:33.845 13:49:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.845 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.845 ************************************ 00:19:33.845 END TEST nvmf_queue_depth 00:19:33.845 ************************************ 00:19:33.845 13:49:35 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:33.845 13:49:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:33.845 13:49:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.845 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:19:33.845 ************************************ 00:19:33.845 START TEST nvmf_multipath 00:19:33.845 ************************************ 00:19:33.845 13:49:35 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:33.845 * Looking for test storage... 00:19:33.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.845 13:49:35 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.845 13:49:35 -- nvmf/common.sh@7 -- # uname -s 00:19:33.845 13:49:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.845 13:49:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.845 13:49:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.845 13:49:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.845 13:49:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.845 13:49:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.845 13:49:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.845 13:49:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.845 13:49:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.845 13:49:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.845 13:49:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.845 13:49:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.845 13:49:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.845 13:49:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.845 13:49:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.845 13:49:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.845 13:49:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.845 13:49:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.845 13:49:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.845 13:49:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.845 13:49:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.845 13:49:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.845 13:49:35 -- paths/export.sh@5 -- # export PATH 00:19:33.845 13:49:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.845 13:49:35 -- nvmf/common.sh@46 -- # : 0 00:19:33.845 13:49:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.845 13:49:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.845 13:49:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.845 13:49:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.845 13:49:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.845 13:49:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.845 13:49:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.845 13:49:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.845 13:49:35 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.845 13:49:35 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.845 13:49:35 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:33.845 13:49:35 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:33.845 13:49:35 -- target/multipath.sh@43 -- # nvmftestinit 00:19:33.845 13:49:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.845 13:49:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.845 13:49:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.845 13:49:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.845 13:49:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.845 13:49:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.845 13:49:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.845 13:49:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.845 13:49:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:33.845 13:49:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:33.845 13:49:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:33.845 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:19:39.118 13:49:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:39.118 13:49:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:39.118 13:49:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:39.118 13:49:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:39.118 13:49:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:39.118 13:49:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:39.118 13:49:40 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:39.118 13:49:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:39.118 13:49:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:39.118 13:49:40 -- nvmf/common.sh@295 -- # e810=() 00:19:39.118 13:49:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:39.118 13:49:40 -- nvmf/common.sh@296 -- # x722=() 00:19:39.118 13:49:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:39.118 13:49:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:39.118 13:49:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:39.118 13:49:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.118 13:49:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:39.118 13:49:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:39.118 13:49:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.118 13:49:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:39.118 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:39.118 13:49:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:39.118 13:49:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:39.118 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:39.118 13:49:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.118 13:49:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.118 13:49:40 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.118 13:49:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:39.118 Found net devices under 0000:86:00.0: cvl_0_0 00:19:39.118 13:49:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.118 13:49:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:39.118 13:49:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.118 13:49:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.118 13:49:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:39.118 Found net devices under 0000:86:00.1: cvl_0_1 00:19:39.118 13:49:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.118 13:49:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:39.118 13:49:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:39.118 13:49:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:39.118 13:49:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.118 13:49:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.118 13:49:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.118 13:49:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:39.118 13:49:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.118 13:49:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.118 13:49:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:39.118 13:49:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.118 13:49:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.118 13:49:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:39.118 13:49:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:39.118 13:49:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.118 13:49:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.118 13:49:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.118 13:49:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.118 13:49:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:39.118 13:49:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.118 13:49:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.118 13:49:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.118 13:49:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:39.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:19:39.119 00:19:39.119 --- 10.0.0.2 ping statistics --- 00:19:39.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.119 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:19:39.119 13:49:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:39.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:19:39.119 00:19:39.119 --- 10.0.0.1 ping statistics --- 00:19:39.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.119 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:39.119 13:49:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.119 13:49:41 -- nvmf/common.sh@410 -- # return 0 00:19:39.119 13:49:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:39.119 13:49:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.119 13:49:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:39.119 13:49:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:39.119 13:49:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.119 13:49:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:39.119 13:49:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:39.119 13:49:41 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:39.119 13:49:41 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:39.119 only one NIC for nvmf test 00:19:39.119 13:49:41 -- target/multipath.sh@47 -- # nvmftestfini 00:19:39.119 13:49:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.119 13:49:41 -- nvmf/common.sh@116 -- # sync 00:19:39.119 13:49:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:39.119 13:49:41 -- nvmf/common.sh@119 -- # set +e 00:19:39.119 13:49:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.119 13:49:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:39.119 rmmod nvme_tcp 00:19:39.119 rmmod nvme_fabrics 00:19:39.119 rmmod nvme_keyring 00:19:39.119 13:49:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.119 13:49:41 -- nvmf/common.sh@123 -- # set -e 00:19:39.119 13:49:41 -- nvmf/common.sh@124 -- # return 0 00:19:39.119 13:49:41 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:39.119 13:49:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.119 13:49:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.119 13:49:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.119 13:49:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.119 13:49:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:39.119 13:49:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.119 13:49:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.119 13:49:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.026 13:49:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:41.026 13:49:43 -- target/multipath.sh@48 -- # exit 0 00:19:41.026 13:49:43 -- target/multipath.sh@1 -- # nvmftestfini 00:19:41.026 13:49:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.026 13:49:43 -- nvmf/common.sh@116 -- # sync 00:19:41.026 13:49:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@119 -- # set +e 00:19:41.026 13:49:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.026 13:49:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.026 13:49:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.026 13:49:43 -- nvmf/common.sh@123 -- # set -e 00:19:41.026 13:49:43 -- nvmf/common.sh@124 -- # return 0 00:19:41.026 13:49:43 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:41.026 13:49:43 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:19:41.026 13:49:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.026 13:49:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:41.026 13:49:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.026 13:49:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.026 13:49:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.026 13:49:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:41.026 00:19:41.026 real 0m7.393s 00:19:41.026 user 0m1.442s 00:19:41.026 sys 0m3.892s 00:19:41.026 13:49:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.026 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 ************************************ 00:19:41.026 END TEST nvmf_multipath 00:19:41.026 ************************************ 00:19:41.026 13:49:43 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:41.026 13:49:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:41.026 13:49:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.026 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 ************************************ 00:19:41.026 START TEST nvmf_zcopy 00:19:41.026 ************************************ 00:19:41.026 13:49:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:41.026 * Looking for test storage... 00:19:41.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.026 13:49:43 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.026 13:49:43 -- nvmf/common.sh@7 -- # uname -s 00:19:41.026 13:49:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.026 13:49:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.026 13:49:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.026 13:49:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.026 13:49:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.026 13:49:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.026 13:49:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.026 13:49:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.026 13:49:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.026 13:49:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.026 13:49:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.026 13:49:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.026 13:49:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.026 13:49:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.026 13:49:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.026 13:49:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.026 13:49:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.026 13:49:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.026 13:49:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.026 13:49:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.026 13:49:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.026 13:49:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.026 13:49:43 -- paths/export.sh@5 -- # export PATH 00:19:41.026 13:49:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.026 13:49:43 -- nvmf/common.sh@46 -- # : 0 00:19:41.026 13:49:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.026 13:49:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.026 13:49:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.026 13:49:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.026 13:49:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.026 13:49:43 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:41.026 13:49:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.026 13:49:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.026 13:49:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.026 13:49:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.026 13:49:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.026 13:49:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.026 13:49:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.026 13:49:43 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.026 13:49:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.026 13:49:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.026 13:49:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.026 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 13:49:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.300 13:49:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:46.300 13:49:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:46.300 13:49:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:46.300 13:49:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:46.300 13:49:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:46.300 13:49:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:46.300 13:49:48 -- nvmf/common.sh@294 -- # net_devs=() 00:19:46.300 13:49:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:46.300 13:49:48 -- nvmf/common.sh@295 -- # e810=() 00:19:46.300 13:49:48 -- nvmf/common.sh@295 -- # local -ga e810 00:19:46.300 13:49:48 -- nvmf/common.sh@296 -- # x722=() 00:19:46.300 13:49:48 -- nvmf/common.sh@296 -- # local -ga x722 00:19:46.300 13:49:48 -- nvmf/common.sh@297 -- # mlx=() 00:19:46.300 13:49:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:46.300 13:49:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.300 13:49:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.300 13:49:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:46.300 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:46.300 13:49:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:46.300 13:49:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:46.300 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:46.300 
13:49:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.300 13:49:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.300 13:49:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.300 13:49:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:46.300 Found net devices under 0000:86:00.0: cvl_0_0 00:19:46.300 13:49:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:46.300 13:49:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.300 13:49:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.300 13:49:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:46.300 Found net devices under 0000:86:00.1: cvl_0_1 00:19:46.300 13:49:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:46.300 13:49:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:46.300 13:49:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.300 13:49:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.300 13:49:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:46.300 13:49:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.300 13:49:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.300 13:49:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:46.300 13:49:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.300 13:49:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.300 13:49:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:46.300 13:49:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:46.300 13:49:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.300 13:49:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.300 13:49:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.300 13:49:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.300 13:49:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:46.300 13:49:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.300 13:49:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.300 13:49:48 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.300 13:49:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:46.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:19:46.300 00:19:46.300 --- 10.0.0.2 ping statistics --- 00:19:46.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.300 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:19:46.300 13:49:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:46.300 00:19:46.300 --- 10.0.0.1 ping statistics --- 00:19:46.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.300 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:46.300 13:49:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.300 13:49:48 -- nvmf/common.sh@410 -- # return 0 00:19:46.300 13:49:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:46.300 13:49:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.300 13:49:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:46.300 13:49:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.300 13:49:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:46.300 13:49:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:46.300 13:49:48 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:46.300 13:49:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:46.300 13:49:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:46.300 13:49:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 13:49:48 -- nvmf/common.sh@469 -- # nvmfpid=1606634 00:19:46.300 13:49:48 -- nvmf/common.sh@470 -- # waitforlisten 1606634 00:19:46.300 13:49:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.300 13:49:48 -- common/autotest_common.sh@819 -- # '[' -z 1606634 ']' 00:19:46.300 13:49:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.300 13:49:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:46.300 13:49:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.300 13:49:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:46.300 13:49:48 -- common/autotest_common.sh@10 -- # set +x 00:19:46.300 [2024-07-11 13:49:48.748585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:46.300 [2024-07-11 13:49:48.748631] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.559 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.559 [2024-07-11 13:49:48.807233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.559 [2024-07-11 13:49:48.844647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:46.559 [2024-07-11 13:49:48.844758] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.559 [2024-07-11 13:49:48.844766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.559 [2024-07-11 13:49:48.844773] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.559 [2024-07-11 13:49:48.844795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.127 13:49:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:47.127 13:49:49 -- common/autotest_common.sh@852 -- # return 0 00:19:47.127 13:49:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:47.127 13:49:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:47.127 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.127 13:49:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.127 13:49:49 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:47.127 13:49:49 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:47.127 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.127 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.127 [2024-07-11 13:49:49.573520] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.127 13:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.127 13:49:49 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:47.127 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.127 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.386 13:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.386 13:49:49 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.386 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.386 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.386 [2024-07-11 13:49:49.593683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.386 13:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.386 13:49:49 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:47.386 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.386 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.386 13:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.386 13:49:49 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:47.386 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.386 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.386 malloc0 00:19:47.386 13:49:49 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:19:47.386 13:49:49 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.386 13:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.386 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:47.386 13:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.386 13:49:49 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:47.386 13:49:49 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:47.386 13:49:49 -- nvmf/common.sh@520 -- # config=() 00:19:47.386 13:49:49 -- nvmf/common.sh@520 -- # local subsystem config 00:19:47.386 13:49:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:47.386 13:49:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:47.386 { 00:19:47.386 "params": { 00:19:47.386 "name": "Nvme$subsystem", 00:19:47.386 "trtype": "$TEST_TRANSPORT", 00:19:47.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.386 "adrfam": "ipv4", 00:19:47.386 "trsvcid": "$NVMF_PORT", 00:19:47.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.386 "hdgst": ${hdgst:-false}, 00:19:47.386 "ddgst": ${ddgst:-false} 00:19:47.386 }, 00:19:47.386 "method": "bdev_nvme_attach_controller" 00:19:47.386 } 00:19:47.386 EOF 00:19:47.386 )") 00:19:47.386 13:49:49 -- nvmf/common.sh@542 -- # cat 00:19:47.386 13:49:49 -- nvmf/common.sh@544 -- # jq . 00:19:47.386 13:49:49 -- nvmf/common.sh@545 -- # IFS=, 00:19:47.386 13:49:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:47.386 "params": { 00:19:47.386 "name": "Nvme1", 00:19:47.386 "trtype": "tcp", 00:19:47.386 "traddr": "10.0.0.2", 00:19:47.386 "adrfam": "ipv4", 00:19:47.386 "trsvcid": "4420", 00:19:47.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.386 "hdgst": false, 00:19:47.386 "ddgst": false 00:19:47.386 }, 00:19:47.386 "method": "bdev_nvme_attach_controller" 00:19:47.386 }' 00:19:47.386 [2024-07-11 13:49:49.674605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:47.386 [2024-07-11 13:49:49.674649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606881 ] 00:19:47.386 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.386 [2024-07-11 13:49:49.727813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.386 [2024-07-11 13:49:49.765614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.648 Running I/O for 10 seconds... 
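[annotation] The verify run above drives a subsystem that was provisioned a few lines earlier via rpc_cmd; pulled out of the trace into a standalone sequence (assuming a running nvmf_tgt and the SPDK tree's scripts/rpc.py talking to the default /var/tmp/spdk.sock — all flags are taken verbatim from the trace):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy       # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0              # 32 MB malloc bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1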
00:19:57.624 00:19:57.624 Latency(us) 00:19:57.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.624 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:57.624 Verification LBA range: start 0x0 length 0x1000 00:19:57.624 Nvme1n1 : 10.01 13039.80 101.87 0.00 0.00 9792.92 1032.90 19375.86 00:19:57.624 =================================================================================================================== 00:19:57.624 Total : 13039.80 101.87 0.00 0.00 9792.92 1032.90 19375.86 00:19:57.884 13:50:00 -- target/zcopy.sh@39 -- # perfpid=1608530 00:19:57.884 13:50:00 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:57.884 13:50:00 -- common/autotest_common.sh@10 -- # set +x 00:19:57.884 13:50:00 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:57.884 13:50:00 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:57.884 13:50:00 -- nvmf/common.sh@520 -- # config=() 00:19:57.884 13:50:00 -- nvmf/common.sh@520 -- # local subsystem config 00:19:57.884 13:50:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:57.884 13:50:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:57.884 { 00:19:57.884 "params": { 00:19:57.884 "name": "Nvme$subsystem", 00:19:57.884 "trtype": "$TEST_TRANSPORT", 00:19:57.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.884 "adrfam": "ipv4", 00:19:57.884 "trsvcid": "$NVMF_PORT", 00:19:57.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.884 "hdgst": ${hdgst:-false}, 00:19:57.884 "ddgst": ${ddgst:-false} 00:19:57.884 }, 00:19:57.884 "method": "bdev_nvme_attach_controller" 00:19:57.884 } 00:19:57.884 EOF 00:19:57.884 )") 00:19:57.884 13:50:00 -- nvmf/common.sh@542 -- # cat 00:19:57.884 13:50:00 -- nvmf/common.sh@544 -- # jq . 
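[annotation] gen_nvmf_target_json above assembles a single bdev_nvme_attach_controller entry (printed in full on the following lines) and hands it to bdevperf through a process-substitution descriptor (--json /dev/fd/63). Written to a regular file instead — the outer "subsystems" wrapper shown here is the usual SPDK JSON-config layout, not shown verbatim in the trace — the equivalent would be a /tmp/bdevperf.json of:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } }
        ]
      }
    ]
  }

run as: build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192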
00:19:57.884 [2024-07-11 13:50:00.164839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:57.884 [2024-07-11 13:50:00.164875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.884 13:50:00 -- nvmf/common.sh@545 -- # IFS=, 00:19:57.884 13:50:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:57.884 "params": { 00:19:57.884 "name": "Nvme1", 00:19:57.884 "trtype": "tcp", 00:19:57.884 "traddr": "10.0.0.2", 00:19:57.884 "adrfam": "ipv4", 00:19:57.884 "trsvcid": "4420", 00:19:57.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.884 "hdgst": false, 00:19:57.884 "ddgst": false 00:19:57.884 }, 00:19:57.884 "method": "bdev_nvme_attach_controller" 00:19:57.884 }' 00:19:57.884 [2024-07-11 13:50:00.176832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:57.884 [2024-07-11 13:50:00.176849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.884 [2024-07-11 13:50:00.184850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:57.884 [2024-07-11 13:50:00.184861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.884 [2024-07-11 13:50:00.192873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:57.884 [2024-07-11 13:50:00.192883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.884 [2024-07-11 13:50:00.200892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:57.884 [2024-07-11 13:50:00.200902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:57.884 [2024-07-11 13:50:00.201532] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:57.884 [2024-07-11 13:50:00.201582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608530 ]
00:19:57.884 EAL: No free 2048 kB hugepages reported on node 1
00:19:57.884 [2024-07-11 13:50:00.256568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:57.884 [2024-07-11 13:50:00.293001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
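
The xtrace above shows how nvmf/common.sh assembles the JSON that bdevperf reads from /dev/fd/63 (a process substitution): each subsystem's attach parameters are rendered through a heredoc into a bash array, the array is joined on IFS=',' and printed, and jq pretty-prints the result. Below is a minimal, runnable sketch of that pattern; only the structure is taken from the trace, while the standalone framing and default values are assumptions for illustration, and any outer envelope around the stanzas is omitted because this excerpt does not show one.

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the trace above.
# The exported test variables are stand-ins; the defaults mirror the
# config the log actually prints (tcp / 10.0.0.2 / 4420).
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do
    # One bdev_nvme_attach_controller stanza per subsystem argument;
    # the unquoted EOF lets the heredoc expand the variables.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Join the stanzas with commas, as the IFS=, / printf trace lines do;
# the test pipes this through jq and hands it to bdevperf --json.
IFS=,
printf '%s\n' "${config[*]}"
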
00:19:58.152 Running I/O for 5 seconds...
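
The "Requested NSID 1 already in use" / "Unable to add namespace" pair first seen above repeats, with only the timestamps advancing, for the remainder of the capture: the nvmf RPC path (nvmf_rpc_ns_paused) keeps rejecting namespace-add requests while bdevperf drives I/O, because NSID 1 is already claimed. A rough, hypothetical reproduction against a live target follows; the rpc.py helper is SPDK's stock script, but the exact flag spelling and the malloc0 bdev name are assumptions to verify against scripts/rpc.py --help.

# Hypothetical reproduction: re-adding an NSID that is already in use
# on nqn.2016-06.io.spdk:cnode1 should produce the same
# spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair
# on the target's console.
./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
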
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.277 [2024-07-11 13:50:02.726387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.277 [2024-07-11 13:50:02.726406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.734946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.734965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.743856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.743878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.752565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.752584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.759211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.759228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.769605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.769624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.778097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.778115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.787443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.787461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.796061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.796079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.804527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.804545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.813397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.813415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.822451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.822469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.831472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.831490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.840468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.840486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.849741] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.849758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.858216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.858234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.866710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.866727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.875805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.875823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.884106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.884124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.892298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.892316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.901084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.901101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.536 [2024-07-11 13:50:02.909688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.536 [2024-07-11 13:50:02.909710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.918536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.918554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.927482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.927500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.936821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.936839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.945438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.945456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.954669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.954686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.968488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.968506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.976926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.976943] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.537 [2024-07-11 13:50:02.985706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.537 [2024-07-11 13:50:02.985723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:02.993703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:02.993722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.003021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:03.003039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.011689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:03.011707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.020475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:03.020494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.028637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:03.028655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.037295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.795 [2024-07-11 13:50:03.037314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.795 [2024-07-11 13:50:03.046186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.046204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.060423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.060443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.069244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.069263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.078811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.078829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.085501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.085518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.095970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.095988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.109793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.109817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.118231] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.118249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.126999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.127017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.135904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.135923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.144365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.144383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.153484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.153501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.162488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.162505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.171115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.171133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.180018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.180036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.188538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.188558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.202075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.202095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.209540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.209560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.218428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.218448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.227338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.227357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.236397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.236415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:00.796 [2024-07-11 13:50:03.245298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:00.796 [2024-07-11 13:50:03.245316] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.054 [2024-07-11 13:50:03.253664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.054 [2024-07-11 13:50:03.253684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.261988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.262007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.271207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.271226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.279754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.279772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.288659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.288678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.297467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.297486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.306514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.306533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.316053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.316074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.324645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.324665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.338641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.338662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.347093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.347114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.355729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.355749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.364571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.364591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.373021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.373040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.382093] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.382112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.388786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.388804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.399370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.399389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.408389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.408408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.417221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.417240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.426128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.426147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.434493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.434512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.443312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.443331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.451554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.451573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.460325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.460344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.469210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.469229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.477894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.477913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.486691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.486710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.495690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.495709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.055 [2024-07-11 13:50:03.505003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.055 [2024-07-11 13:50:03.505022] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.514609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.514631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.522974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.522993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.531952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.531971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.540433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.540451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.548901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.548920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.557885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.557904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.566044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.566063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.574566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.574586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.583114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.583132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.591529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.591547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.600425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.600443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.609261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.609279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.618206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.618224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.627086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.627105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.635979] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.635996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.645406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.645425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.653935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.653953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.662910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.662929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.671896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.671915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.680901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.680920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.689838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.689857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.698249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.698267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.704833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.704851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.715403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.715421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.722249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.722268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.737525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.737544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.746002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.746020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.754925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.754943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.313 [2024-07-11 13:50:03.763615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.313 [2024-07-11 13:50:03.763637] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.772355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.772375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.780768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.780786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.789813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.789831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.799329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.799347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.807941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.807960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.817197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.817215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.826189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.826208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.834810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.834832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.843479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.843498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.851701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.851719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.860539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.860557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.869202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.869220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.878502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.878520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.886751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.886769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.895463] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.895481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.904099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.904117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.913059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.913077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.922083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.922102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.930947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.930972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.939768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.939785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.948893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.948911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.962997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.963015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.971599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.971616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.980000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.980018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.988809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.988826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:03.997385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:03.997404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:04.006113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:04.006130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:04.014517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:04.014534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.572 [2024-07-11 13:50:04.023648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.572 [2024-07-11 13:50:04.023666] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.032224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.032243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.040920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.040939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.049480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.049498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.058226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.058244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.066496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.066514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.075752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.075770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.084838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.084857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.093995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.094014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.100705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.100727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.110971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.110989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.119383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.119400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.128183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.128201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.137177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.137195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.146053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.146071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.154276] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.154294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.162729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.831 [2024-07-11 13:50:04.162747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.831 [2024-07-11 13:50:04.171492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.171511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.185689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.185708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.194282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.194300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.203214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.203232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.212087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.212105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.220870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.220887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.229103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.229121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.238137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.238155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.246719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.246738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.255753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.255771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.264399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.264417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:01.832 [2024-07-11 13:50:04.278433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:01.832 [2024-07-11 13:50:04.278455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.287168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.287187] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.296211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.296229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.305051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.305069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.314370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.314388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.332109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.332128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.340949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.340968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.349493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.349511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.358471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.358491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.367517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.367537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.376408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.376426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.385223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.385241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.394534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.394552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.402880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.402898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.416852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.416870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.425191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:02.091 [2024-07-11 13:50:04.425209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:02.091 [2024-07-11 13:50:04.433918] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:02.091 [2024-07-11 13:50:04.433936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same subsystem.c:1793 / nvmf_rpc.c:1513 error pair repeats once per namespace-add attempt from [2024-07-11 13:50:04.442852] through [2024-07-11 13:50:05.616007]; the intervening duplicate pairs are elided)
00:20:03.390 Latency(us)
00:20:03.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:03.390 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:20:03.390 Nvme1n1 : 5.01 17463.28 136.43 0.00 0.00 7322.60 2635.69 16868.40
00:20:03.390 ===================================================================================================================
00:20:03.390 Total : 17463.28 136.43 0.00 0.00 7322.60 2635.69 16868.40
00:20:03.390 [2024-07-11 13:50:05.622623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:20:03.390 [2024-07-11 13:50:05.622640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the pair keeps repeating through [2024-07-11 13:50:05.795092] while the remaining add attempts drain; duplicates elided)
00:20:03.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1608530) - No such process
00:20:03.391 13:50:05 -- target/zcopy.sh@49 -- # wait 1608530
00:20:03.391 13:50:05 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:20:03.391 13:50:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.391 13:50:05 -- common/autotest_common.sh@10 -- # set +x
00:20:03.391 13:50:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.391 13:50:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:20:03.391 13:50:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.391 13:50:05 -- common/autotest_common.sh@10 -- # set +x
00:20:03.391 delay0
00:20:03.391 13:50:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.391 13:50:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:20:03.391 13:50:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:20:03.391 13:50:05 -- common/autotest_common.sh@10 -- # set +x
00:20:03.391 13:50:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:20:03.391 13:50:05 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:20:03.649 EAL: No free 2048 kB hugepages reported on node 1
00:20:03.649 [2024-07-11 13:50:05.962318] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:20:10.216 [2024-07-11 13:50:12.224592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19df880 is same with the state(5) to be set
00:20:10.216 [2024-07-11
13:50:12.224636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19df880 is same with the state(5) to be set 00:20:10.216 Initializing NVMe Controllers 00:20:10.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.216 Initialization complete. Launching workers. 00:20:10.216 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1612 00:20:10.216 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1899, failed to submit 33 00:20:10.216 success 1685, unsuccess 214, failed 0 00:20:10.216 13:50:12 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:10.216 13:50:12 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:10.216 13:50:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:10.216 13:50:12 -- nvmf/common.sh@116 -- # sync 00:20:10.216 13:50:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:10.216 13:50:12 -- nvmf/common.sh@119 -- # set +e 00:20:10.216 13:50:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:10.216 13:50:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:10.216 rmmod nvme_tcp 00:20:10.216 rmmod nvme_fabrics 00:20:10.216 rmmod nvme_keyring 00:20:10.216 13:50:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:10.216 13:50:12 -- nvmf/common.sh@123 -- # set -e 00:20:10.216 13:50:12 -- nvmf/common.sh@124 -- # return 0 00:20:10.216 13:50:12 -- nvmf/common.sh@477 -- # '[' -n 1606634 ']' 00:20:10.216 13:50:12 -- nvmf/common.sh@478 -- # killprocess 1606634 00:20:10.216 13:50:12 -- common/autotest_common.sh@926 -- # '[' -z 1606634 ']' 00:20:10.216 13:50:12 -- common/autotest_common.sh@930 -- # kill -0 1606634 00:20:10.216 13:50:12 -- common/autotest_common.sh@931 -- # uname 00:20:10.216 13:50:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.216 13:50:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1606634 00:20:10.216 13:50:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.216 13:50:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.216 13:50:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1606634' 00:20:10.216 killing process with pid 1606634 00:20:10.216 13:50:12 -- common/autotest_common.sh@945 -- # kill 1606634 00:20:10.216 13:50:12 -- common/autotest_common.sh@950 -- # wait 1606634 00:20:10.216 13:50:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:10.216 13:50:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:10.216 13:50:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:10.216 13:50:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.216 13:50:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:10.216 13:50:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.216 13:50:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.216 13:50:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.124 13:50:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:12.124 00:20:12.124 real 0m31.316s 00:20:12.124 user 0m42.871s 00:20:12.124 sys 0m10.481s 00:20:12.124 13:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.383 13:50:14 -- common/autotest_common.sh@10 -- # set +x 00:20:12.383 ************************************ 00:20:12.383 END TEST nvmf_zcopy 00:20:12.383 
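************************************

For anyone reading this log to understand what nvmf_zcopy just exercised, the run above reduces to the short sketch below. It assumes SPDK's scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper; the 200-iteration loop bound and the malloc0 backing bdev are assumptions inferred from the delay0 steps, while the NQN, delay parameters, and abort flags are taken verbatim from the trace.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # NSID 1 is already attached, so each further add with -n 1 fails with
  # "Requested NSID 1 already in use" -- the error pair collapsed above.
  # (loop bound illustrative; backing bdev name inferred from the delay0 steps)
  for i in $(seq 1 200); do
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done

  # Swap the namespace for a delay bdev and drive abort traffic at it,
  # mirroring steps @52-@56 of zcopy.sh in the trace.
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  "$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev makes in-flight I/O slow enough that the abort tool has something to cancel, which is why the summary above reports 1899 aborts submitted against the delayed namespace.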
00:20:12.383 13:50:14 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:12.383 13:50:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:20:12.383 13:50:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:12.383 13:50:14 -- common/autotest_common.sh@10 -- # set +x
00:20:12.383 ************************************
00:20:12.383 START TEST nvmf_nmic
00:20:12.383 ************************************
00:20:12.383 13:50:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:20:12.384 * Looking for test storage...
00:20:12.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:12.384 13:50:14 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:12.384 13:50:14 -- nvmf/common.sh@7 -- # uname -s
00:20:12.384 13:50:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:12.384 13:50:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:12.384 13:50:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:12.384 13:50:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:12.384 13:50:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:12.384 13:50:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:12.384 13:50:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:12.384 13:50:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:12.384 13:50:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:12.384 13:50:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:12.384 13:50:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:20:12.384 13:50:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:20:12.384 13:50:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:12.384 13:50:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:12.384 13:50:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:12.384 13:50:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:12.384 13:50:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:12.384 13:50:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:12.384 13:50:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:12.384 13:50:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[…duplicate toolchain segments elided…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.384 13:50:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[…duplicate toolchain segments elided…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.384 13:50:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[…duplicate toolchain segments elided…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.384 13:50:14 -- paths/export.sh@5 -- # export PATH
00:20:12.384 13:50:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[…duplicate toolchain segments elided…]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.384 13:50:14 -- nvmf/common.sh@46 -- # : 0
00:20:12.384 13:50:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:12.384 13:50:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:12.384 13:50:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:12.384 13:50:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:12.384 13:50:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:12.384 13:50:14 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:12.384 13:50:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:12.384 13:50:14 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:12.384 13:50:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:12.384 13:50:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:12.384 13:50:14 -- target/nmic.sh@14 -- # nvmftestinit
00:20:12.384 13:50:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:12.384 13:50:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:12.384 13:50:14 -- nvmf/common.sh@436 -- # prepare_net_devs
00:20:12.384 13:50:14 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:12.384 13:50:14 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:12.384 13:50:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:12.384 13:50:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:12.384 13:50:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.384 13:50:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:20:12.384 13:50:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:20:12.384 13:50:14 -- nvmf/common.sh@284 -- # xtrace_disable
00:20:12.384 13:50:14 -- common/autotest_common.sh@10 -- # set +x
00:20:17.661 13:50:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
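The gather_supported_nvmf_pci_devs probe traced next matches NIC PCI IDs against allow-lists (e810/x722/mlx arrays keyed by vendor:device). A rough plain-bash equivalent is sketched below; the 0x1592/0x159b device IDs are the E810 entries the trace actually checks, while the sysfs rescan itself is an assumption, since the real helper reads from a pre-built pci_bus_cache rather than walking /sys directly.

  intel=0x8086
  e810=() net_devs=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      # 0x1592 / 0x159b are the supported Intel E810 IDs from the allow-list
      if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          e810+=("$dev")
          for net in "$dev"/net/*; do
              net_devs+=("${net##*/}")   # e.g. cvl_0_0 / cvl_0_1 on this rig
          done
      fi
  done
  printf 'Found net device %s\n' "${net_devs[@]}"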
00:20:17.661 13:50:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:17.661 13:50:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:17.661 13:50:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:17.661 13:50:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:17.661 13:50:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:17.661 13:50:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:17.661 13:50:19 -- nvmf/common.sh@294 -- # net_devs=() 00:20:17.661 13:50:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:17.661 13:50:19 -- nvmf/common.sh@295 -- # e810=() 00:20:17.661 13:50:19 -- nvmf/common.sh@295 -- # local -ga e810 00:20:17.661 13:50:19 -- nvmf/common.sh@296 -- # x722=() 00:20:17.661 13:50:19 -- nvmf/common.sh@296 -- # local -ga x722 00:20:17.661 13:50:19 -- nvmf/common.sh@297 -- # mlx=() 00:20:17.661 13:50:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:17.661 13:50:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.661 13:50:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:17.661 13:50:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:17.661 13:50:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:17.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:17.661 13:50:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:17.661 13:50:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:17.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:17.661 13:50:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:20:17.661 13:50:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:17.661 13:50:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.661 13:50:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:17.661 13:50:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.661 13:50:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:17.661 Found net devices under 0000:86:00.0: cvl_0_0 00:20:17.661 13:50:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:17.661 13:50:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.661 13:50:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:17.661 13:50:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.661 13:50:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:17.661 Found net devices under 0000:86:00.1: cvl_0_1 00:20:17.661 13:50:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:17.661 13:50:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:17.661 13:50:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:17.661 13:50:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.661 13:50:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.661 13:50:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:17.661 13:50:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.661 13:50:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.661 13:50:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:17.661 13:50:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.661 13:50:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.661 13:50:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:17.661 13:50:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:17.661 13:50:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.661 13:50:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.661 13:50:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.661 13:50:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.661 13:50:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:17.661 13:50:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.661 13:50:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.661 13:50:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.661 13:50:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:17.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:17.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:20:17.661 00:20:17.661 --- 10.0.0.2 ping statistics --- 00:20:17.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.661 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:20:17.661 13:50:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:17.661 00:20:17.661 --- 10.0.0.1 ping statistics --- 00:20:17.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.661 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:17.661 13:50:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.661 13:50:19 -- nvmf/common.sh@410 -- # return 0 00:20:17.661 13:50:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:17.661 13:50:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.661 13:50:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:17.661 13:50:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.661 13:50:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:17.661 13:50:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:17.661 13:50:19 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:17.661 13:50:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:17.661 13:50:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:17.661 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:20:17.661 13:50:19 -- nvmf/common.sh@469 -- # nvmfpid=1613944 00:20:17.662 13:50:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:17.662 13:50:19 -- nvmf/common.sh@470 -- # waitforlisten 1613944 00:20:17.662 13:50:19 -- common/autotest_common.sh@819 -- # '[' -z 1613944 ']' 00:20:17.662 13:50:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.662 13:50:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.662 13:50:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.662 13:50:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.662 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:20:17.662 [2024-07-11 13:50:19.957662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:17.662 [2024-07-11 13:50:19.957706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.662 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.662 [2024-07-11 13:50:20.016444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.662 [2024-07-11 13:50:20.058329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.662 [2024-07-11 13:50:20.058438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.662 [2024-07-11 13:50:20.058447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
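Condensing the nvmf_tcp_init steps above into one place: the target-side port moves into a private network namespace and takes 10.0.0.2, the initiator side keeps 10.0.0.1, TCP port 4420 is opened, and both directions are ping-checked. The commands are the ones visible in the trace; the interface names cvl_0_0/cvl_0_1 are specific to this rig.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process is then started inside cvl_0_0_ns_spdk, which is why its RPC-driven listener at 10.0.0.2:4420 is reachable from the host-side initiator over cvl_0_1.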
00:20:17.662 [2024-07-11 13:50:20.058453] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.662 [2024-07-11 13:50:20.058510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.662 [2024-07-11 13:50:20.058607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.662 [2024-07-11 13:50:20.058700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.662 [2024-07-11 13:50:20.058700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.597 13:50:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.597 13:50:20 -- common/autotest_common.sh@852 -- # return 0 00:20:18.597 13:50:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.597 13:50:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 13:50:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.597 13:50:20 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 [2024-07-11 13:50:20.805627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 Malloc0 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 [2024-07-11 13:50:20.857425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:18.597 test case1: single bdev can't be used in multiple subsystems 00:20:18.597 13:50:20 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@28 -- # nmic_status=0 00:20:18.597 13:50:20 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 [2024-07-11 13:50:20.885337] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:18.597 [2024-07-11 13:50:20.885357] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:18.597 [2024-07-11 13:50:20.885365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:18.597 request: 00:20:18.597 { 00:20:18.597 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.597 "namespace": { 00:20:18.597 "bdev_name": "Malloc0" 00:20:18.597 }, 00:20:18.597 "method": "nvmf_subsystem_add_ns", 00:20:18.597 "req_id": 1 00:20:18.597 } 00:20:18.597 Got JSON-RPC error response 00:20:18.597 response: 00:20:18.597 { 00:20:18.597 "code": -32602, 00:20:18.597 "message": "Invalid parameters" 00:20:18.597 } 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@29 -- # nmic_status=1 00:20:18.597 13:50:20 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:18.597 13:50:20 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:18.597 Adding namespace failed - expected result. 
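Stripped of the xtrace noise, test case1 is the following RPC sequence (scripts/rpc.py assumed here as the transport behind the harness's rpc_cmd; names and flags as logged). The failing call is the point of the test: Malloc0 already carries an exclusive_write claim from cnode1, so a second subsystem cannot add it as a namespace, and the RPC returns the -32602 error shown above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Malloc0 is claimed exclusive_write by cnode1, so this add must fail:
  if ! "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'
  fi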
00:20:18.597 13:50:20 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:18.597 test case2: host connect to nvmf target in multiple paths 00:20:18.597 13:50:20 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:18.597 13:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:18.597 13:50:20 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 [2024-07-11 13:50:20.897465] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:18.597 13:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:18.597 13:50:20 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:20.009 13:50:22 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:20.945 13:50:23 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:20.945 13:50:23 -- common/autotest_common.sh@1177 -- # local i=0 00:20:20.945 13:50:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:20.945 13:50:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:20.945 13:50:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:22.847 13:50:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:22.847 13:50:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:22.847 13:50:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:22.847 13:50:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:22.847 13:50:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:22.847 13:50:25 -- common/autotest_common.sh@1187 -- # return 0 00:20:22.847 13:50:25 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:23.112 [global] 00:20:23.112 thread=1 00:20:23.112 invalidate=1 00:20:23.112 rw=write 00:20:23.112 time_based=1 00:20:23.112 runtime=1 00:20:23.112 ioengine=libaio 00:20:23.112 direct=1 00:20:23.112 bs=4096 00:20:23.112 iodepth=1 00:20:23.112 norandommap=0 00:20:23.112 numjobs=1 00:20:23.112 00:20:23.112 verify_dump=1 00:20:23.112 verify_backlog=512 00:20:23.112 verify_state_save=0 00:20:23.112 do_verify=1 00:20:23.112 verify=crc32c-intel 00:20:23.112 [job0] 00:20:23.112 filename=/dev/nvme0n1 00:20:23.112 Could not set queue depth (nvme0n1) 00:20:23.368 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.368 fio-3.35 00:20:23.368 Starting 1 thread 00:20:24.297 00:20:24.297 job0: (groupid=0, jobs=1): err= 0: pid=1615035: Thu Jul 11 13:50:26 2024 00:20:24.297 read: IOPS=1762, BW=7049KiB/s (7218kB/s)(7056KiB/1001msec) 00:20:24.297 slat (nsec): min=6517, max=27626, avg=7320.24, stdev=975.23 00:20:24.297 clat (usec): min=286, max=511, avg=335.37, stdev=17.24 00:20:24.297 lat (usec): min=293, max=519, avg=342.69, stdev=17.27 00:20:24.297 clat percentiles (usec): 00:20:24.297 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:20:24.297 | 30.00th=[ 326], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:20:24.297 | 70.00th=[ 347], 80.00th=[ 
355], 90.00th=[ 359], 95.00th=[ 363], 00:20:24.297 | 99.00th=[ 371], 99.50th=[ 375], 99.90th=[ 498], 99.95th=[ 510], 00:20:24.297 | 99.99th=[ 510] 00:20:24.297 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:24.297 slat (nsec): min=9062, max=39162, avg=10217.40, stdev=1222.77 00:20:24.297 clat (usec): min=156, max=386, avg=179.04, stdev= 8.90 00:20:24.297 lat (usec): min=166, max=425, avg=189.26, stdev= 9.28 00:20:24.297 clat percentiles (usec): 00:20:24.297 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 174], 00:20:24.297 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 180], 00:20:24.297 | 70.00th=[ 182], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 190], 00:20:24.297 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 277], 99.95th=[ 281], 00:20:24.297 | 99.99th=[ 388] 00:20:24.297 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:20:24.297 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:24.297 lat (usec) : 250=53.65%, 500=46.33%, 750=0.03% 00:20:24.297 cpu : usr=1.90%, sys=3.40%, ctx=3812, majf=0, minf=2 00:20:24.297 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.297 issued rwts: total=1764,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.297 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.297 00:20:24.297 Run status group 0 (all jobs): 00:20:24.297 READ: bw=7049KiB/s (7218kB/s), 7049KiB/s-7049KiB/s (7218kB/s-7218kB/s), io=7056KiB (7225kB), run=1001-1001msec 00:20:24.297 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:20:24.297 00:20:24.297 Disk stats (read/write): 00:20:24.297 nvme0n1: ios=1586/1924, merge=0/0, ticks=519/340, in_queue=859, util=91.38% 00:20:24.555 13:50:26 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:24.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:24.555 13:50:26 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:24.555 13:50:26 -- common/autotest_common.sh@1198 -- # local i=0 00:20:24.555 13:50:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:24.555 13:50:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:24.555 13:50:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:24.555 13:50:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:24.555 13:50:26 -- common/autotest_common.sh@1210 -- # return 0 00:20:24.555 13:50:26 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:24.555 13:50:26 -- target/nmic.sh@53 -- # nvmftestfini 00:20:24.555 13:50:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:24.555 13:50:26 -- nvmf/common.sh@116 -- # sync 00:20:24.555 13:50:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:24.555 13:50:26 -- nvmf/common.sh@119 -- # set +e 00:20:24.555 13:50:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:24.555 13:50:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:24.555 rmmod nvme_tcp 00:20:24.555 rmmod nvme_fabrics 00:20:24.555 rmmod nvme_keyring 00:20:24.555 13:50:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:24.555 13:50:26 -- nvmf/common.sh@123 -- # set -e 00:20:24.555 13:50:26 -- nvmf/common.sh@124 -- # return 0 00:20:24.555 13:50:26 
-- nvmf/common.sh@477 -- # '[' -n 1613944 ']' 00:20:24.555 13:50:26 -- nvmf/common.sh@478 -- # killprocess 1613944 00:20:24.555 13:50:26 -- common/autotest_common.sh@926 -- # '[' -z 1613944 ']' 00:20:24.555 13:50:26 -- common/autotest_common.sh@930 -- # kill -0 1613944 00:20:24.555 13:50:26 -- common/autotest_common.sh@931 -- # uname 00:20:24.555 13:50:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:24.555 13:50:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1613944 00:20:24.811 13:50:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:24.811 13:50:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:24.811 13:50:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1613944' 00:20:24.811 killing process with pid 1613944 00:20:24.811 13:50:27 -- common/autotest_common.sh@945 -- # kill 1613944 00:20:24.811 13:50:27 -- common/autotest_common.sh@950 -- # wait 1613944 00:20:24.811 13:50:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:24.811 13:50:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:24.811 13:50:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:24.811 13:50:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.811 13:50:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:24.811 13:50:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.811 13:50:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.811 13:50:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.338 13:50:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:27.338 00:20:27.338 real 0m14.669s 00:20:27.338 user 0m35.188s 00:20:27.338 sys 0m4.752s 00:20:27.338 13:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.338 13:50:29 -- common/autotest_common.sh@10 -- # set +x 00:20:27.338 ************************************ 00:20:27.338 END TEST nvmf_nmic 00:20:27.338 ************************************ 00:20:27.338 13:50:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:27.338 13:50:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:27.338 13:50:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:27.338 13:50:29 -- common/autotest_common.sh@10 -- # set +x 00:20:27.338 ************************************ 00:20:27.338 START TEST nvmf_fio_target 00:20:27.338 ************************************ 00:20:27.338 13:50:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:27.338 * Looking for test storage... 
00:20:27.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.338 13:50:29 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.338 13:50:29 -- nvmf/common.sh@7 -- # uname -s 00:20:27.338 13:50:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.338 13:50:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.338 13:50:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.338 13:50:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.338 13:50:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.338 13:50:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.338 13:50:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.338 13:50:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.338 13:50:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.338 13:50:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.338 13:50:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.338 13:50:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:27.338 13:50:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.338 13:50:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.338 13:50:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.338 13:50:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.338 13:50:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.338 13:50:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.338 13:50:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.338 13:50:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.338 13:50:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.338 13:50:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.338 13:50:29 -- paths/export.sh@5 -- # export PATH 00:20:27.338 13:50:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.338 13:50:29 -- nvmf/common.sh@46 -- # : 0 00:20:27.338 13:50:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:27.338 13:50:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:27.338 13:50:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:27.338 13:50:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.338 13:50:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.338 13:50:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:27.338 13:50:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:27.338 13:50:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:27.338 13:50:29 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.338 13:50:29 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.338 13:50:29 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:27.338 13:50:29 -- target/fio.sh@16 -- # nvmftestinit 00:20:27.338 13:50:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:27.338 13:50:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.338 13:50:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:27.338 13:50:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:27.338 13:50:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:27.338 13:50:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.338 13:50:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.338 13:50:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.338 13:50:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:27.338 13:50:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:27.338 13:50:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:27.338 13:50:29 -- common/autotest_common.sh@10 -- # set +x 00:20:32.630 13:50:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:32.630 13:50:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:32.630 13:50:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:32.630 13:50:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:32.630 13:50:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:32.630 13:50:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:32.630 13:50:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:32.630 13:50:34 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:32.630 13:50:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:32.630 13:50:34 -- nvmf/common.sh@295 -- # e810=() 00:20:32.630 13:50:34 -- nvmf/common.sh@295 -- # local -ga e810 00:20:32.630 13:50:34 -- nvmf/common.sh@296 -- # x722=() 00:20:32.630 13:50:34 -- nvmf/common.sh@296 -- # local -ga x722 00:20:32.631 13:50:34 -- nvmf/common.sh@297 -- # mlx=() 00:20:32.631 13:50:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:32.631 13:50:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.631 13:50:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:32.631 13:50:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:32.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:32.631 13:50:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:32.631 13:50:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:32.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:32.631 13:50:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:32.631 13:50:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.631 13:50:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
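The probing traced here maps each supported NIC (two Intel E810 ports, device ID 0x159b, in this run) from its PCI address to a kernel net device by globbing sysfs, which is what pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) does. A one-line sketch of the same lookup for the first port, using the 0000:86:00.0 address taken from this log:

  ls /sys/bus/pci/devices/0000:86:00.0/net    # prints cvl_0_0 on this machine
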
00:20:32.631 13:50:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:32.631 Found net devices under 0000:86:00.0: cvl_0_0 00:20:32.631 13:50:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:32.631 13:50:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.631 13:50:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.631 13:50:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:32.631 Found net devices under 0000:86:00.1: cvl_0_1 00:20:32.631 13:50:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:32.631 13:50:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:32.631 13:50:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.631 13:50:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.631 13:50:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:32.631 13:50:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.631 13:50:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.631 13:50:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:32.631 13:50:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.631 13:50:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.631 13:50:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:32.631 13:50:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:32.631 13:50:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.631 13:50:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.631 13:50:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.631 13:50:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.631 13:50:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:32.631 13:50:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.631 13:50:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.631 13:50:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.631 13:50:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:32.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:20:32.631 00:20:32.631 --- 10.0.0.2 ping statistics --- 00:20:32.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.631 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:32.631 13:50:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:20:32.631 00:20:32.631 --- 10.0.0.1 ping statistics --- 00:20:32.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.631 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:32.631 13:50:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.631 13:50:34 -- nvmf/common.sh@410 -- # return 0 00:20:32.631 13:50:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:32.631 13:50:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.631 13:50:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:32.631 13:50:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.631 13:50:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:32.631 13:50:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:32.631 13:50:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:32.631 13:50:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:32.631 13:50:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:32.631 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:20:32.631 13:50:34 -- nvmf/common.sh@469 -- # nvmfpid=1618802 00:20:32.631 13:50:34 -- nvmf/common.sh@470 -- # waitforlisten 1618802 00:20:32.631 13:50:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.631 13:50:34 -- common/autotest_common.sh@819 -- # '[' -z 1618802 ']' 00:20:32.631 13:50:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.631 13:50:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.631 13:50:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.631 13:50:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.631 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:20:32.631 [2024-07-11 13:50:34.698753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:32.631 [2024-07-11 13:50:34.698793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.631 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.631 [2024-07-11 13:50:34.758916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.631 [2024-07-11 13:50:34.797499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.631 [2024-07-11 13:50:34.797608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.631 [2024-07-11 13:50:34.797616] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.631 [2024-07-11 13:50:34.797624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
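At this point the harness has split the two E810 ports across network namespaces: cvl_0_0 (the target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and the two pings above confirm the path in both directions before nvmf_tgt is started inside the namespace. A condensed sketch of that setup, using the same interface names, addresses, and flags as the commands traced in the log (the relative ./build/bin path is an assumption about an SPDK checkout layout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic from the initiator interface reach port 4420
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target then runs inside the namespace, as the nvmfappstart trace shows
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
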
00:20:32.631 [2024-07-11 13:50:34.797735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.631 [2024-07-11 13:50:34.797843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.631 [2024-07-11 13:50:34.797951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.631 [2024-07-11 13:50:34.797956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.198 13:50:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.198 13:50:35 -- common/autotest_common.sh@852 -- # return 0 00:20:33.198 13:50:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:33.198 13:50:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:33.198 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:20:33.198 13:50:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.198 13:50:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:33.458 [2024-07-11 13:50:35.693077] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.458 13:50:35 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:33.717 13:50:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:33.717 13:50:35 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:33.717 13:50:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:33.717 13:50:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:33.975 13:50:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:33.976 13:50:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:34.234 13:50:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:34.234 13:50:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:34.234 13:50:36 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:34.520 13:50:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:34.520 13:50:36 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:34.779 13:50:37 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:34.779 13:50:37 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:34.779 13:50:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:34.779 13:50:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:35.038 13:50:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:35.297 13:50:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:35.297 13:50:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.297 13:50:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:35.297 13:50:37 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.556 13:50:37 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.815 [2024-07-11 13:50:38.078773] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.815 13:50:38 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:36.073 13:50:38 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:36.073 13:50:38 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:37.452 13:50:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:37.452 13:50:39 -- common/autotest_common.sh@1177 -- # local i=0 00:20:37.452 13:50:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:37.452 13:50:39 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:37.452 13:50:39 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:37.452 13:50:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:39.357 13:50:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:39.357 13:50:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:39.357 13:50:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:39.357 13:50:41 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:39.357 13:50:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:39.357 13:50:41 -- common/autotest_common.sh@1187 -- # return 0 00:20:39.357 13:50:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:39.358 [global] 00:20:39.358 thread=1 00:20:39.358 invalidate=1 00:20:39.358 rw=write 00:20:39.358 time_based=1 00:20:39.358 runtime=1 00:20:39.358 ioengine=libaio 00:20:39.358 direct=1 00:20:39.358 bs=4096 00:20:39.358 iodepth=1 00:20:39.358 norandommap=0 00:20:39.358 numjobs=1 00:20:39.358 00:20:39.358 verify_dump=1 00:20:39.358 verify_backlog=512 00:20:39.358 verify_state_save=0 00:20:39.358 do_verify=1 00:20:39.358 verify=crc32c-intel 00:20:39.358 [job0] 00:20:39.358 filename=/dev/nvme0n1 00:20:39.358 [job1] 00:20:39.358 filename=/dev/nvme0n2 00:20:39.358 [job2] 00:20:39.358 filename=/dev/nvme0n3 00:20:39.358 [job3] 00:20:39.358 filename=/dev/nvme0n4 00:20:39.358 Could not set queue depth (nvme0n1) 00:20:39.358 Could not set queue depth (nvme0n2) 00:20:39.358 Could not set queue depth (nvme0n3) 00:20:39.358 Could not set queue depth (nvme0n4) 00:20:39.617 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.617 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.617 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.617 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.617 fio-3.35 
00:20:39.617 Starting 4 threads 00:20:41.031 00:20:41.031 job0: (groupid=0, jobs=1): err= 0: pid=1620168: Thu Jul 11 13:50:43 2024 00:20:41.031 read: IOPS=1780, BW=7121KiB/s (7292kB/s)(7128KiB/1001msec) 00:20:41.031 slat (nsec): min=6927, max=39624, avg=7911.68, stdev=1548.65 00:20:41.031 clat (usec): min=234, max=41499, avg=314.23, stdev=1005.95 00:20:41.031 lat (usec): min=242, max=41518, avg=322.14, stdev=1006.30 00:20:41.031 clat percentiles (usec): 00:20:41.031 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 262], 00:20:41.031 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:20:41.031 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 379], 00:20:41.031 | 99.00th=[ 453], 99.50th=[ 465], 99.90th=[10421], 99.95th=[41681], 00:20:41.031 | 99.99th=[41681] 00:20:41.031 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:41.031 slat (nsec): min=10020, max=41492, avg=11466.65, stdev=1843.24 00:20:41.031 clat (usec): min=152, max=411, avg=190.42, stdev=23.45 00:20:41.031 lat (usec): min=167, max=424, avg=201.89, stdev=23.68 00:20:41.031 clat percentiles (usec): 00:20:41.031 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:20:41.031 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:20:41.031 | 70.00th=[ 196], 80.00th=[ 210], 90.00th=[ 229], 95.00th=[ 241], 00:20:41.031 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 297], 00:20:41.031 | 99.99th=[ 412] 00:20:41.031 bw ( KiB/s): min= 8192, max= 8192, per=58.86%, avg=8192.00, stdev= 0.00, samples=1 00:20:41.031 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:41.031 lat (usec) : 250=54.54%, 500=45.40% 00:20:41.031 lat (msec) : 20=0.03%, 50=0.03% 00:20:41.031 cpu : usr=3.60%, sys=5.70%, ctx=3831, majf=0, minf=1 00:20:41.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.031 issued rwts: total=1782,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.031 job1: (groupid=0, jobs=1): err= 0: pid=1620169: Thu Jul 11 13:50:43 2024 00:20:41.031 read: IOPS=22, BW=90.8KiB/s (93.0kB/s)(92.0KiB/1013msec) 00:20:41.031 slat (nsec): min=9658, max=21968, avg=19993.65, stdev=3176.91 00:20:41.031 clat (usec): min=366, max=41213, avg=39194.12, stdev=8464.89 00:20:41.031 lat (usec): min=387, max=41223, avg=39214.11, stdev=8464.74 00:20:41.031 clat percentiles (usec): 00:20:41.031 | 1.00th=[ 367], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:20:41.031 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:41.031 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:41.031 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:41.031 | 99.99th=[41157] 00:20:41.031 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:20:41.031 slat (nsec): min=8744, max=40687, avg=11287.73, stdev=2309.38 00:20:41.031 clat (usec): min=166, max=387, avg=201.23, stdev=17.31 00:20:41.031 lat (usec): min=176, max=412, avg=212.51, stdev=18.21 00:20:41.031 clat percentiles (usec): 00:20:41.031 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:20:41.031 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 202], 00:20:41.031 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 
219], 95.00th=[ 229], 00:20:41.031 | 99.00th=[ 249], 99.50th=[ 277], 99.90th=[ 388], 99.95th=[ 388], 00:20:41.031 | 99.99th=[ 388] 00:20:41.031 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:20:41.031 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:41.031 lat (usec) : 250=94.77%, 500=1.12% 00:20:41.031 lat (msec) : 50=4.11% 00:20:41.031 cpu : usr=0.20%, sys=0.59%, ctx=536, majf=0, minf=2 00:20:41.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.031 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.031 job2: (groupid=0, jobs=1): err= 0: pid=1620170: Thu Jul 11 13:50:43 2024 00:20:41.031 read: IOPS=200, BW=804KiB/s (823kB/s)(828KiB/1030msec) 00:20:41.031 slat (nsec): min=7616, max=27663, avg=9856.65, stdev=3669.33 00:20:41.031 clat (usec): min=262, max=42006, avg=4253.74, stdev=12071.72 00:20:41.031 lat (usec): min=271, max=42025, avg=4263.60, stdev=12074.39 00:20:41.031 clat percentiles (usec): 00:20:41.031 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 289], 00:20:41.032 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:20:41.032 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 1565], 95.00th=[41157], 00:20:41.032 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:20:41.032 | 99.99th=[42206] 00:20:41.032 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:20:41.032 slat (usec): min=10, max=35579, avg=83.15, stdev=1571.82 00:20:41.032 clat (usec): min=148, max=385, avg=198.75, stdev=24.52 00:20:41.032 lat (usec): min=164, max=35942, avg=281.90, stdev=1579.27 00:20:41.032 clat percentiles (usec): 00:20:41.032 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:20:41.032 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:20:41.032 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 233], 00:20:41.032 | 99.00th=[ 247], 99.50th=[ 285], 99.90th=[ 388], 99.95th=[ 388], 00:20:41.032 | 99.99th=[ 388] 00:20:41.032 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:20:41.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:41.032 lat (usec) : 250=70.65%, 500=26.29%, 750=0.14% 00:20:41.032 lat (msec) : 2=0.14%, 50=2.78% 00:20:41.032 cpu : usr=0.58%, sys=0.87%, ctx=721, majf=0, minf=1 00:20:41.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.032 issued rwts: total=207,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.032 job3: (groupid=0, jobs=1): err= 0: pid=1620171: Thu Jul 11 13:50:43 2024 00:20:41.032 read: IOPS=84, BW=338KiB/s (346kB/s)(340KiB/1006msec) 00:20:41.032 slat (nsec): min=7852, max=26517, avg=12108.24, stdev=6090.53 00:20:41.032 clat (usec): min=279, max=41536, avg=10367.19, stdev=17638.22 00:20:41.032 lat (usec): min=287, max=41559, avg=10379.30, stdev=17644.11 00:20:41.032 clat percentiles (usec): 00:20:41.032 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 289], 20.00th=[ 297], 00:20:41.032 | 
30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 371], 00:20:41.032 | 70.00th=[ 396], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:20:41.032 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:41.032 | 99.99th=[41681] 00:20:41.032 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:20:41.032 slat (usec): min=3, max=104, avg=12.56, stdev=10.33 00:20:41.032 clat (usec): min=145, max=1001, avg=223.71, stdev=59.77 00:20:41.032 lat (usec): min=180, max=1013, avg=236.27, stdev=61.43 00:20:41.032 clat percentiles (usec): 00:20:41.032 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 200], 00:20:41.032 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:20:41.032 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 285], 00:20:41.032 | 99.00th=[ 478], 99.50th=[ 635], 99.90th=[ 1004], 99.95th=[ 1004], 00:20:41.032 | 99.99th=[ 1004] 00:20:41.032 bw ( KiB/s): min= 4096, max= 4096, per=29.43%, avg=4096.00, stdev= 0.00, samples=1 00:20:41.032 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:41.032 lat (usec) : 250=77.72%, 500=17.92%, 750=0.67% 00:20:41.032 lat (msec) : 2=0.17%, 50=3.52% 00:20:41.032 cpu : usr=0.90%, sys=0.10%, ctx=600, majf=0, minf=1 00:20:41.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.032 issued rwts: total=85,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:41.032 00:20:41.032 Run status group 0 (all jobs): 00:20:41.032 READ: bw=8144KiB/s (8339kB/s), 90.8KiB/s-7121KiB/s (93.0kB/s-7292kB/s), io=8388KiB (8589kB), run=1001-1030msec 00:20:41.032 WRITE: bw=13.6MiB/s (14.3MB/s), 1988KiB/s-8184KiB/s (2036kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1030msec 00:20:41.032 00:20:41.032 Disk stats (read/write): 00:20:41.032 nvme0n1: ios=1586/1676, merge=0/0, ticks=489/306, in_queue=795, util=87.07% 00:20:41.032 nvme0n2: ios=43/512, merge=0/0, ticks=1641/99, in_queue=1740, util=89.84% 00:20:41.032 nvme0n3: ios=265/512, merge=0/0, ticks=953/99, in_queue=1052, util=93.44% 00:20:41.032 nvme0n4: ios=144/512, merge=0/0, ticks=1271/115, in_queue=1386, util=95.49% 00:20:41.032 13:50:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:41.032 [global] 00:20:41.032 thread=1 00:20:41.032 invalidate=1 00:20:41.032 rw=randwrite 00:20:41.032 time_based=1 00:20:41.032 runtime=1 00:20:41.032 ioengine=libaio 00:20:41.032 direct=1 00:20:41.032 bs=4096 00:20:41.032 iodepth=1 00:20:41.032 norandommap=0 00:20:41.032 numjobs=1 00:20:41.032 00:20:41.032 verify_dump=1 00:20:41.032 verify_backlog=512 00:20:41.032 verify_state_save=0 00:20:41.032 do_verify=1 00:20:41.032 verify=crc32c-intel 00:20:41.032 [job0] 00:20:41.032 filename=/dev/nvme0n1 00:20:41.032 [job1] 00:20:41.032 filename=/dev/nvme0n2 00:20:41.032 [job2] 00:20:41.032 filename=/dev/nvme0n3 00:20:41.032 [job3] 00:20:41.032 filename=/dev/nvme0n4 00:20:41.032 Could not set queue depth (nvme0n1) 00:20:41.032 Could not set queue depth (nvme0n2) 00:20:41.032 Could not set queue depth (nvme0n3) 00:20:41.032 Could not set queue depth (nvme0n4) 00:20:41.032 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:41.032 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:41.032 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:41.032 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:41.032 fio-3.35 00:20:41.032 Starting 4 threads 00:20:42.410 00:20:42.410 job0: (groupid=0, jobs=1): err= 0: pid=1620551: Thu Jul 11 13:50:44 2024 00:20:42.410 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:20:42.410 slat (nsec): min=10117, max=28843, avg=22417.82, stdev=3178.81 00:20:42.410 clat (usec): min=40860, max=41935, avg=41032.19, stdev=230.03 00:20:42.410 lat (usec): min=40883, max=41964, avg=41054.61, stdev=230.21 00:20:42.410 clat percentiles (usec): 00:20:42.410 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:42.410 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:42.410 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:42.410 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:42.410 | 99.99th=[41681] 00:20:42.410 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:20:42.410 slat (usec): min=9, max=242, avg=14.95, stdev=16.36 00:20:42.410 clat (usec): min=136, max=359, avg=214.45, stdev=22.20 00:20:42.410 lat (usec): min=179, max=419, avg=229.40, stdev=26.27 00:20:42.410 clat percentiles (usec): 00:20:42.410 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:20:42.410 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:20:42.410 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 243], 95.00th=[ 251], 00:20:42.410 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 359], 99.95th=[ 359], 00:20:42.410 | 99.99th=[ 359] 00:20:42.410 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:20:42.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:42.410 lat (usec) : 250=90.82%, 500=5.06% 00:20:42.410 lat (msec) : 50=4.12% 00:20:42.410 cpu : usr=0.20%, sys=1.08%, ctx=536, majf=0, minf=1 00:20:42.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.411 job1: (groupid=0, jobs=1): err= 0: pid=1620552: Thu Jul 11 13:50:44 2024 00:20:42.411 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:20:42.411 slat (nsec): min=9878, max=20777, avg=19698.36, stdev=2242.20 00:20:42.411 clat (usec): min=40868, max=41954, avg=41082.61, stdev=301.58 00:20:42.411 lat (usec): min=40889, max=41974, avg=41102.31, stdev=300.90 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:42.411 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:42.411 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:20:42.411 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:42.411 | 99.99th=[42206] 00:20:42.411 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:20:42.411 slat (nsec): min=9907, max=43731, avg=11642.36, stdev=3090.04 00:20:42.411 clat 
(usec): min=163, max=430, avg=234.32, stdev=41.07 00:20:42.411 lat (usec): min=173, max=459, avg=245.97, stdev=41.71 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 206], 00:20:42.411 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 233], 00:20:42.411 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 289], 95.00th=[ 322], 00:20:42.411 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 433], 00:20:42.411 | 99.99th=[ 433] 00:20:42.411 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:20:42.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:42.411 lat (usec) : 250=74.91%, 500=20.97% 00:20:42.411 lat (msec) : 50=4.12% 00:20:42.411 cpu : usr=0.00%, sys=1.07%, ctx=534, majf=0, minf=1 00:20:42.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.411 job2: (groupid=0, jobs=1): err= 0: pid=1620553: Thu Jul 11 13:50:44 2024 00:20:42.411 read: IOPS=520, BW=2081KiB/s (2131kB/s)(2116KiB/1017msec) 00:20:42.411 slat (nsec): min=7348, max=35924, avg=8631.93, stdev=2751.78 00:20:42.411 clat (usec): min=268, max=41036, avg=1459.38, stdev=6747.92 00:20:42.411 lat (usec): min=276, max=41058, avg=1468.02, stdev=6749.83 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:20:42.411 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 306], 60.00th=[ 310], 00:20:42.411 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 343], 00:20:42.411 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:20:42.411 | 99.99th=[41157] 00:20:42.411 write: IOPS=1006, BW=4028KiB/s (4124kB/s)(4096KiB/1017msec); 0 zone resets 00:20:42.411 slat (nsec): min=10843, max=35174, avg=12129.87, stdev=1988.15 00:20:42.411 clat (usec): min=180, max=510, avg=217.58, stdev=22.61 00:20:42.411 lat (usec): min=191, max=545, avg=229.71, stdev=23.05 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:20:42.411 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:20:42.411 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 262], 00:20:42.411 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 334], 99.95th=[ 510], 00:20:42.411 | 99.99th=[ 510] 00:20:42.411 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:20:42.411 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:42.411 lat (usec) : 250=60.91%, 500=38.06%, 750=0.06% 00:20:42.411 lat (msec) : 50=0.97% 00:20:42.411 cpu : usr=1.97%, sys=1.77%, ctx=1554, majf=0, minf=2 00:20:42.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.411 job3: (groupid=0, jobs=1): err= 0: pid=1620554: Thu Jul 11 13:50:44 2024 00:20:42.411 read: IOPS=1952, BW=7808KiB/s 
(7996kB/s)(7816KiB/1001msec) 00:20:42.411 slat (nsec): min=6416, max=26775, avg=7292.77, stdev=998.03 00:20:42.411 clat (usec): min=230, max=482, avg=281.10, stdev=30.17 00:20:42.411 lat (usec): min=248, max=489, avg=288.39, stdev=30.23 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:20:42.411 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:20:42.411 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:20:42.411 | 99.00th=[ 367], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 482], 00:20:42.411 | 99.99th=[ 482] 00:20:42.411 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:42.411 slat (nsec): min=9414, max=36165, avg=11011.63, stdev=1959.89 00:20:42.411 clat (usec): min=156, max=440, avg=197.41, stdev=28.07 00:20:42.411 lat (usec): min=166, max=451, avg=208.43, stdev=28.89 00:20:42.411 clat percentiles (usec): 00:20:42.411 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:20:42.411 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:20:42.411 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 251], 00:20:42.411 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 351], 99.95th=[ 355], 00:20:42.411 | 99.99th=[ 441] 00:20:42.411 bw ( KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1 00:20:42.411 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:42.411 lat (usec) : 250=51.12%, 500=48.88% 00:20:42.411 cpu : usr=2.30%, sys=3.90%, ctx=4005, majf=0, minf=1 00:20:42.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:42.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.411 issued rwts: total=1954,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:42.411 00:20:42.411 Run status group 0 (all jobs): 00:20:42.411 READ: bw=9795KiB/s (10.0MB/s), 85.3KiB/s-7808KiB/s (87.3kB/s-7996kB/s), io=9.87MiB (10.3MB), run=1001-1032msec 00:20:42.411 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-8184KiB/s (2032kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1032msec 00:20:42.411 00:20:42.411 Disk stats (read/write): 00:20:42.411 nvme0n1: ios=54/512, merge=0/0, ticks=1655/105, in_queue=1760, util=98.20% 00:20:42.411 nvme0n2: ios=34/512, merge=0/0, ticks=797/115, in_queue=912, util=91.37% 00:20:42.411 nvme0n3: ios=549/1024, merge=0/0, ticks=1585/207, in_queue=1792, util=98.44% 00:20:42.411 nvme0n4: ios=1560/1897, merge=0/0, ticks=1412/366, in_queue=1778, util=98.43% 00:20:42.411 13:50:44 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:42.411 [global] 00:20:42.411 thread=1 00:20:42.411 invalidate=1 00:20:42.411 rw=write 00:20:42.411 time_based=1 00:20:42.411 runtime=1 00:20:42.411 ioengine=libaio 00:20:42.411 direct=1 00:20:42.411 bs=4096 00:20:42.411 iodepth=128 00:20:42.411 norandommap=0 00:20:42.411 numjobs=1 00:20:42.411 00:20:42.411 verify_dump=1 00:20:42.411 verify_backlog=512 00:20:42.411 verify_state_save=0 00:20:42.411 do_verify=1 00:20:42.411 verify=crc32c-intel 00:20:42.411 [job0] 00:20:42.411 filename=/dev/nvme0n1 00:20:42.411 [job1] 00:20:42.411 filename=/dev/nvme0n2 00:20:42.411 [job2] 00:20:42.411 filename=/dev/nvme0n3 00:20:42.411 [job3] 00:20:42.411 filename=/dev/nvme0n4 00:20:42.411 Could 
not set queue depth (nvme0n1) 00:20:42.411 Could not set queue depth (nvme0n2) 00:20:42.411 Could not set queue depth (nvme0n3) 00:20:42.411 Could not set queue depth (nvme0n4) 00:20:42.670 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:42.670 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:42.670 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:42.670 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:42.670 fio-3.35 00:20:42.670 Starting 4 threads 00:20:44.049 00:20:44.049 job0: (groupid=0, jobs=1): err= 0: pid=1620926: Thu Jul 11 13:50:46 2024 00:20:44.049 read: IOPS=5113, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:20:44.049 slat (nsec): min=1132, max=11732k, avg=78850.80, stdev=502960.61 00:20:44.049 clat (usec): min=389, max=29959, avg=11544.30, stdev=3412.62 00:20:44.049 lat (usec): min=2420, max=29965, avg=11623.15, stdev=3424.93 00:20:44.049 clat percentiles (usec): 00:20:44.049 | 1.00th=[ 5342], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[ 9241], 00:20:44.049 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11338], 00:20:44.049 | 70.00th=[11731], 80.00th=[12518], 90.00th=[16057], 95.00th=[20055], 00:20:44.049 | 99.00th=[23462], 99.50th=[23987], 99.90th=[26346], 99.95th=[26346], 00:20:44.049 | 99.99th=[30016] 00:20:44.049 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:20:44.049 slat (nsec): min=1865, max=14750k, avg=91166.78, stdev=601635.00 00:20:44.049 clat (usec): min=2471, max=72037, avg=11923.13, stdev=6936.52 00:20:44.049 lat (usec): min=2477, max=72047, avg=12014.30, stdev=6953.20 00:20:44.049 clat percentiles (usec): 00:20:44.049 | 1.00th=[ 4490], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8717], 00:20:44.049 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:20:44.049 | 70.00th=[12125], 80.00th=[12780], 90.00th=[15139], 95.00th=[18744], 00:20:44.049 | 99.00th=[51643], 99.50th=[64750], 99.90th=[70779], 99.95th=[71828], 00:20:44.049 | 99.99th=[71828] 00:20:44.049 bw ( KiB/s): min=21136, max=22968, per=30.39%, avg=22052.00, stdev=1295.42, samples=2 00:20:44.049 iops : min= 5284, max= 5742, avg=5513.00, stdev=323.85, samples=2 00:20:44.049 lat (usec) : 500=0.01% 00:20:44.049 lat (msec) : 4=0.56%, 10=28.44%, 20=66.94%, 50=3.48%, 100=0.59% 00:20:44.049 cpu : usr=2.79%, sys=6.09%, ctx=402, majf=0, minf=1 00:20:44.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:44.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.049 issued rwts: total=5129,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.049 job1: (groupid=0, jobs=1): err= 0: pid=1620927: Thu Jul 11 13:50:46 2024 00:20:44.049 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:20:44.049 slat (nsec): min=1503, max=22128k, avg=149120.84, stdev=938593.82 00:20:44.049 clat (usec): min=7192, max=56546, avg=19126.53, stdev=11885.78 00:20:44.049 lat (usec): min=7198, max=56552, avg=19275.65, stdev=11959.67 00:20:44.049 clat percentiles (usec): 00:20:44.049 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10159], 20.00th=[10683], 00:20:44.049 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12911], 
60.00th=[14746], 00:20:44.049 | 70.00th=[19530], 80.00th=[29754], 90.00th=[41681], 95.00th=[46400], 00:20:44.049 | 99.00th=[50070], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:20:44.049 | 99.99th=[56361] 00:20:44.049 write: IOPS=3892, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets 00:20:44.049 slat (nsec): min=1831, max=9877.6k, avg=113289.47, stdev=692812.40 00:20:44.049 clat (usec): min=305, max=44327, avg=14825.61, stdev=6660.60 00:20:44.049 lat (usec): min=4299, max=44336, avg=14938.90, stdev=6696.18 00:20:44.049 clat percentiles (usec): 00:20:44.049 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10814], 00:20:44.049 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:20:44.049 | 70.00th=[13960], 80.00th=[18744], 90.00th=[25560], 95.00th=[29754], 00:20:44.049 | 99.00th=[39584], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:20:44.049 | 99.99th=[44303] 00:20:44.049 bw ( KiB/s): min=19680, max=19680, per=27.12%, avg=19680.00, stdev= 0.00, samples=1 00:20:44.049 iops : min= 4920, max= 4920, avg=4920.00, stdev= 0.00, samples=1 00:20:44.049 lat (usec) : 500=0.01% 00:20:44.049 lat (msec) : 10=8.25%, 20=68.38%, 50=22.57%, 100=0.79% 00:20:44.050 cpu : usr=3.50%, sys=3.70%, ctx=329, majf=0, minf=1 00:20:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.050 issued rwts: total=3584,3896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.050 job2: (groupid=0, jobs=1): err= 0: pid=1620934: Thu Jul 11 13:50:46 2024 00:20:44.050 read: IOPS=4038, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1002msec) 00:20:44.050 slat (nsec): min=1421, max=15360k, avg=126586.01, stdev=834301.08 00:20:44.050 clat (usec): min=557, max=62125, avg=16064.48, stdev=7155.91 00:20:44.050 lat (usec): min=1919, max=62130, avg=16191.06, stdev=7230.40 00:20:44.050 clat percentiles (usec): 00:20:44.050 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11731], 00:20:44.050 | 30.00th=[12649], 40.00th=[13960], 50.00th=[15008], 60.00th=[16188], 00:20:44.050 | 70.00th=[16909], 80.00th=[18744], 90.00th=[20841], 95.00th=[25822], 00:20:44.050 | 99.00th=[57410], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:20:44.050 | 99.99th=[62129] 00:20:44.050 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:20:44.050 slat (usec): min=2, max=11278, avg=106.93, stdev=720.22 00:20:44.050 clat (usec): min=1713, max=62114, avg=15060.85, stdev=5510.60 00:20:44.050 lat (usec): min=1733, max=62118, avg=15167.78, stdev=5548.30 00:20:44.050 clat percentiles (usec): 00:20:44.050 | 1.00th=[ 5800], 5.00th=[ 8848], 10.00th=[11076], 20.00th=[11994], 00:20:44.050 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13698], 60.00th=[14615], 00:20:44.050 | 70.00th=[15926], 80.00th=[17433], 90.00th=[21103], 95.00th=[23200], 00:20:44.050 | 99.00th=[41157], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:20:44.050 | 99.99th=[62129] 00:20:44.050 bw ( KiB/s): min=16384, max=16384, per=22.58%, avg=16384.00, stdev= 0.00, samples=2 00:20:44.050 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:20:44.050 lat (usec) : 750=0.01% 00:20:44.050 lat (msec) : 2=0.20%, 4=0.04%, 10=8.89%, 20=76.89%, 50=13.15% 00:20:44.050 lat (msec) : 100=0.82% 00:20:44.050 cpu : usr=4.00%, sys=4.60%, ctx=288, majf=0, 
minf=1 00:20:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.050 issued rwts: total=4047,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.050 job3: (groupid=0, jobs=1): err= 0: pid=1620935: Thu Jul 11 13:50:46 2024 00:20:44.050 read: IOPS=4301, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1005msec) 00:20:44.050 slat (nsec): min=1400, max=19811k, avg=103886.54, stdev=878122.41 00:20:44.050 clat (usec): min=3896, max=39675, avg=13537.57, stdev=5133.58 00:20:44.050 lat (usec): min=3903, max=39779, avg=13641.45, stdev=5204.72 00:20:44.050 clat percentiles (usec): 00:20:44.050 | 1.00th=[ 6259], 5.00th=[ 7767], 10.00th=[ 8848], 20.00th=[10159], 00:20:44.050 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11338], 60.00th=[12780], 00:20:44.050 | 70.00th=[15008], 80.00th=[17957], 90.00th=[20317], 95.00th=[20841], 00:20:44.050 | 99.00th=[31589], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:20:44.050 | 99.99th=[39584] 00:20:44.050 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:20:44.050 slat (usec): min=2, max=14901, avg=99.16, stdev=700.31 00:20:44.050 clat (usec): min=2073, max=77851, avg=14866.08, stdev=11195.40 00:20:44.050 lat (usec): min=2086, max=80130, avg=14965.25, stdev=11267.98 00:20:44.050 clat percentiles (usec): 00:20:44.050 | 1.00th=[ 3818], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 9241], 00:20:44.050 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:20:44.050 | 70.00th=[14222], 80.00th=[17957], 90.00th=[23200], 95.00th=[36439], 00:20:44.050 | 99.00th=[70779], 99.50th=[71828], 99.90th=[78119], 99.95th=[78119], 00:20:44.050 | 99.99th=[78119] 00:20:44.050 bw ( KiB/s): min=16064, max=20800, per=25.40%, avg=18432.00, stdev=3348.86, samples=2 00:20:44.050 iops : min= 4016, max= 5200, avg=4608.00, stdev=837.21, samples=2 00:20:44.050 lat (msec) : 4=0.71%, 10=21.40%, 20=62.88%, 50=13.35%, 100=1.67% 00:20:44.050 cpu : usr=3.78%, sys=5.98%, ctx=411, majf=0, minf=1 00:20:44.050 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:44.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.050 issued rwts: total=4323,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.050 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.050 00:20:44.050 Run status group 0 (all jobs): 00:20:44.050 READ: bw=66.4MiB/s (69.6MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=66.7MiB (70.0MB), run=1001-1005msec 00:20:44.050 WRITE: bw=70.9MiB/s (74.3MB/s), 15.2MiB/s-21.9MiB/s (15.9MB/s-23.0MB/s), io=71.2MiB (74.7MB), run=1001-1005msec 00:20:44.050 00:20:44.050 Disk stats (read/write): 00:20:44.050 nvme0n1: ios=4490/4608, merge=0/0, ticks=27759/32713, in_queue=60472, util=97.89% 00:20:44.050 nvme0n2: ios=3280/3584, merge=0/0, ticks=18325/16397, in_queue=34722, util=96.44% 00:20:44.050 nvme0n3: ios=3445/3584, merge=0/0, ticks=35285/37332, in_queue=72617, util=97.40% 00:20:44.050 nvme0n4: ios=3565/3591, merge=0/0, ticks=47823/52148, in_queue=99971, util=97.80% 00:20:44.050 13:50:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:44.050 [global] 
00:20:44.050 thread=1 00:20:44.050 invalidate=1 00:20:44.050 rw=randwrite 00:20:44.050 time_based=1 00:20:44.050 runtime=1 00:20:44.050 ioengine=libaio 00:20:44.050 direct=1 00:20:44.050 bs=4096 00:20:44.050 iodepth=128 00:20:44.050 norandommap=0 00:20:44.050 numjobs=1 00:20:44.050 00:20:44.050 verify_dump=1 00:20:44.050 verify_backlog=512 00:20:44.050 verify_state_save=0 00:20:44.050 do_verify=1 00:20:44.050 verify=crc32c-intel 00:20:44.050 [job0] 00:20:44.050 filename=/dev/nvme0n1 00:20:44.050 [job1] 00:20:44.050 filename=/dev/nvme0n2 00:20:44.050 [job2] 00:20:44.050 filename=/dev/nvme0n3 00:20:44.050 [job3] 00:20:44.050 filename=/dev/nvme0n4 00:20:44.050 Could not set queue depth (nvme0n1) 00:20:44.050 Could not set queue depth (nvme0n2) 00:20:44.050 Could not set queue depth (nvme0n3) 00:20:44.050 Could not set queue depth (nvme0n4) 00:20:44.309 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:44.309 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:44.309 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:44.309 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:44.309 fio-3.35 00:20:44.310 Starting 4 threads 00:20:45.701 00:20:45.701 job0: (groupid=0, jobs=1): err= 0: pid=1621307: Thu Jul 11 13:50:47 2024 00:20:45.701 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:20:45.701 slat (nsec): min=1624, max=10639k, avg=91017.98, stdev=682970.39 00:20:45.701 clat (usec): min=1739, max=22549, avg=11418.17, stdev=2796.97 00:20:45.701 lat (usec): min=4064, max=24394, avg=11509.19, stdev=2846.60 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 5145], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[ 9503], 00:20:45.701 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11076], 00:20:45.701 | 70.00th=[11469], 80.00th=[13304], 90.00th=[15664], 95.00th=[17433], 00:20:45.701 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22152], 99.95th=[22676], 00:20:45.701 | 99.99th=[22676] 00:20:45.701 write: IOPS=5667, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1005msec); 0 zone resets 00:20:45.701 slat (usec): min=2, max=35892, avg=79.24, stdev=737.44 00:20:45.701 clat (usec): min=863, max=44510, avg=11085.50, stdev=4925.88 00:20:45.701 lat (usec): min=874, max=44520, avg=11164.74, stdev=4958.15 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 3785], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7635], 00:20:45.701 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:20:45.701 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13173], 95.00th=[16581], 00:20:45.701 | 99.00th=[37487], 99.50th=[37487], 99.90th=[44303], 99.95th=[44303], 00:20:45.701 | 99.99th=[44303] 00:20:45.701 bw ( KiB/s): min=20496, max=24560, per=29.06%, avg=22528.00, stdev=2873.68, samples=2 00:20:45.701 iops : min= 5124, max= 6140, avg=5632.00, stdev=718.42, samples=2 00:20:45.701 lat (usec) : 1000=0.03% 00:20:45.701 lat (msec) : 2=0.27%, 4=0.43%, 10=30.87%, 20=66.54%, 50=1.85% 00:20:45.701 cpu : usr=3.88%, sys=5.78%, ctx=496, majf=0, minf=1 00:20:45.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:45.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.701 issued rwts: total=5632,5696,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:45.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.701 job1: (groupid=0, jobs=1): err= 0: pid=1621308: Thu Jul 11 13:50:47 2024 00:20:45.701 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:20:45.701 slat (nsec): min=1123, max=32275k, avg=116543.67, stdev=1033919.49 00:20:45.701 clat (msec): min=5, max=120, avg=16.48, stdev=14.21 00:20:45.701 lat (msec): min=5, max=120, avg=16.60, stdev=14.28 00:20:45.701 clat percentiles (msec): 00:20:45.701 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:20:45.701 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:20:45.701 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 23], 95.00th=[ 48], 00:20:45.701 | 99.00th=[ 89], 99.50th=[ 89], 99.90th=[ 89], 99.95th=[ 122], 00:20:45.701 | 99.99th=[ 122] 00:20:45.701 write: IOPS=4546, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1004msec); 0 zone resets 00:20:45.701 slat (nsec): min=1873, max=12637k, avg=96291.45, stdev=578835.25 00:20:45.701 clat (usec): min=1099, max=34155, avg=13111.24, stdev=4149.02 00:20:45.701 lat (usec): min=1119, max=34164, avg=13207.53, stdev=4161.95 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 3818], 5.00th=[ 6259], 10.00th=[ 7963], 20.00th=[10945], 00:20:45.701 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13304], 00:20:45.701 | 70.00th=[13960], 80.00th=[14615], 90.00th=[18482], 95.00th=[22152], 00:20:45.701 | 99.00th=[25560], 99.50th=[26346], 99.90th=[29492], 99.95th=[29492], 00:20:45.701 | 99.99th=[34341] 00:20:45.701 bw ( KiB/s): min=15792, max=19712, per=22.90%, avg=17752.00, stdev=2771.86, samples=2 00:20:45.701 iops : min= 3948, max= 4928, avg=4438.00, stdev=692.96, samples=2 00:20:45.701 lat (msec) : 2=0.14%, 4=0.52%, 10=10.84%, 20=79.77%, 50=6.77% 00:20:45.701 lat (msec) : 100=1.93%, 250=0.03% 00:20:45.701 cpu : usr=2.79%, sys=4.49%, ctx=500, majf=0, minf=1 00:20:45.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:45.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.701 issued rwts: total=4096,4565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.701 job2: (groupid=0, jobs=1): err= 0: pid=1621309: Thu Jul 11 13:50:47 2024 00:20:45.701 read: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1004msec) 00:20:45.701 slat (nsec): min=1087, max=14145k, avg=98068.09, stdev=618258.96 00:20:45.701 clat (usec): min=1773, max=55918, avg=13648.65, stdev=5644.27 00:20:45.701 lat (usec): min=5923, max=55926, avg=13746.72, stdev=5655.24 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 7898], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10945], 00:20:45.701 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:20:45.701 | 70.00th=[13566], 80.00th=[14353], 90.00th=[15533], 95.00th=[19006], 00:20:45.701 | 99.00th=[42730], 99.50th=[49546], 99.90th=[55837], 99.95th=[55837], 00:20:45.701 | 99.99th=[55837] 00:20:45.701 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:20:45.701 slat (nsec): min=1897, max=27000k, avg=113941.71, stdev=817745.51 00:20:45.701 clat (usec): min=6245, max=62721, avg=14437.11, stdev=5623.13 00:20:45.701 lat (usec): min=6505, max=62773, avg=14551.05, stdev=5694.50 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 9765], 20.00th=[11731], 00:20:45.701 
| 30.00th=[12649], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:20:45.701 | 70.00th=[14353], 80.00th=[14746], 90.00th=[17171], 95.00th=[26608], 00:20:45.701 | 99.00th=[35914], 99.50th=[43779], 99.90th=[48497], 99.95th=[48497], 00:20:45.701 | 99.99th=[62653] 00:20:45.701 bw ( KiB/s): min=17896, max=18968, per=23.78%, avg=18432.00, stdev=758.02, samples=2 00:20:45.701 iops : min= 4474, max= 4742, avg=4608.00, stdev=189.50, samples=2 00:20:45.701 lat (msec) : 2=0.01%, 10=11.46%, 20=81.99%, 50=6.46%, 100=0.08% 00:20:45.701 cpu : usr=3.29%, sys=4.09%, ctx=465, majf=0, minf=1 00:20:45.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:45.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.701 issued rwts: total=4431,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.701 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.701 job3: (groupid=0, jobs=1): err= 0: pid=1621310: Thu Jul 11 13:50:47 2024 00:20:45.701 read: IOPS=4518, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1004msec) 00:20:45.701 slat (nsec): min=1396, max=11773k, avg=105543.91, stdev=728206.85 00:20:45.701 clat (usec): min=956, max=26120, avg=13117.97, stdev=2998.25 00:20:45.701 lat (usec): min=3351, max=26127, avg=13223.51, stdev=3049.41 00:20:45.701 clat percentiles (usec): 00:20:45.701 | 1.00th=[ 6259], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10945], 00:20:45.701 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:20:45.701 | 70.00th=[13566], 80.00th=[13960], 90.00th=[17171], 95.00th=[19006], 00:20:45.701 | 99.00th=[23200], 99.50th=[23987], 99.90th=[26084], 99.95th=[26084], 00:20:45.701 | 99.99th=[26084] 00:20:45.701 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:20:45.701 slat (usec): min=2, max=18392, avg=106.03, stdev=685.76 00:20:45.701 clat (usec): min=1672, max=38708, avg=14704.04, stdev=5306.42 00:20:45.701 lat (usec): min=1684, max=38714, avg=14810.07, stdev=5344.96 00:20:45.701 clat percentiles (usec): 00:20:45.702 | 1.00th=[ 3982], 5.00th=[ 8029], 10.00th=[10552], 20.00th=[11863], 00:20:45.702 | 30.00th=[12518], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:20:45.702 | 70.00th=[14615], 80.00th=[16581], 90.00th=[20841], 95.00th=[25297], 00:20:45.702 | 99.00th=[35390], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:20:45.702 | 99.99th=[38536] 00:20:45.702 bw ( KiB/s): min=17616, max=19248, per=23.78%, avg=18432.00, stdev=1154.00, samples=2 00:20:45.702 iops : min= 4404, max= 4812, avg=4608.00, stdev=288.50, samples=2 00:20:45.702 lat (usec) : 1000=0.01% 00:20:45.702 lat (msec) : 2=0.12%, 4=0.51%, 10=7.69%, 20=83.93%, 50=7.74% 00:20:45.702 cpu : usr=3.89%, sys=5.98%, ctx=451, majf=0, minf=1 00:20:45.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:45.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.702 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.702 00:20:45.702 Run status group 0 (all jobs): 00:20:45.702 READ: bw=72.7MiB/s (76.2MB/s), 15.9MiB/s-21.9MiB/s (16.7MB/s-23.0MB/s), io=73.0MiB (76.6MB), run=1004-1005msec 00:20:45.702 WRITE: bw=75.7MiB/s (79.4MB/s), 17.8MiB/s-22.1MiB/s (18.6MB/s-23.2MB/s), io=76.1MiB (79.8MB), run=1004-1005msec 
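For reference, the job file behind the randwrite pass above can be reconstructed from the [global]/[job*] parameters the wrapper echoes into the log. A minimal sketch in shell follows; the temp-file path is an assumption, and the real fio-wrapper derives these settings from its -p/-i/-d/-t/-r flags rather than a hand-written file:

# Sketch: regenerate and run the fio job echoed by the wrapper above.
# The /tmp path is assumed; all option values and device names are
# taken verbatim from the [global]/[job*] dump in the log.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf-randwrite.fio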
00:20:45.702 00:20:45.702 Disk stats (read/write): 00:20:45.702 nvme0n1: ios=4631/4745, merge=0/0, ticks=52190/47357, in_queue=99547, util=96.19% 00:20:45.702 nvme0n2: ios=3483/3584, merge=0/0, ticks=29882/24159, in_queue=54041, util=96.32% 00:20:45.702 nvme0n3: ios=3640/3963, merge=0/0, ticks=22711/28379, in_queue=51090, util=95.66% 00:20:45.702 nvme0n4: ios=3641/3953, merge=0/0, ticks=36748/42351, in_queue=79099, util=96.25% 00:20:45.702 13:50:47 -- target/fio.sh@55 -- # sync 00:20:45.702 13:50:47 -- target/fio.sh@59 -- # fio_pid=1621540 00:20:45.702 13:50:47 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:45.702 13:50:47 -- target/fio.sh@61 -- # sleep 3 00:20:45.702 [global] 00:20:45.702 thread=1 00:20:45.702 invalidate=1 00:20:45.702 rw=read 00:20:45.702 time_based=1 00:20:45.702 runtime=10 00:20:45.702 ioengine=libaio 00:20:45.702 direct=1 00:20:45.702 bs=4096 00:20:45.702 iodepth=1 00:20:45.702 norandommap=1 00:20:45.702 numjobs=1 00:20:45.702 00:20:45.702 [job0] 00:20:45.702 filename=/dev/nvme0n1 00:20:45.702 [job1] 00:20:45.702 filename=/dev/nvme0n2 00:20:45.702 [job2] 00:20:45.702 filename=/dev/nvme0n3 00:20:45.702 [job3] 00:20:45.702 filename=/dev/nvme0n4 00:20:45.702 Could not set queue depth (nvme0n1) 00:20:45.702 Could not set queue depth (nvme0n2) 00:20:45.702 Could not set queue depth (nvme0n3) 00:20:45.702 Could not set queue depth (nvme0n4) 00:20:45.959 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.960 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.960 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.960 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:45.960 fio-3.35 00:20:45.960 Starting 4 threads 00:20:48.482 13:50:50 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:48.738 13:50:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:48.738 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=274432, buflen=4096 00:20:48.738 fio: pid=1621712, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:48.995 13:50:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:48.995 13:50:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:48.995 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10420224, buflen=4096 00:20:48.995 fio: pid=1621707, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:48.995 13:50:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:48.995 13:50:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:48.995 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=344064, buflen=4096 00:20:48.995 fio: pid=1621685, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:49.253 13:50:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:49.253 13:50:51 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:49.253 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=339968, buflen=4096 00:20:49.253 fio: pid=1621691, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:49.253 00:20:49.253 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1621685: Thu Jul 11 13:50:51 2024 00:20:49.253 read: IOPS=28, BW=111KiB/s (114kB/s)(336KiB/3029msec) 00:20:49.253 slat (usec): min=5, max=31742, avg=507.93, stdev=3589.54 00:20:49.253 clat (usec): min=363, max=42001, avg=35290.10, stdev=14253.04 00:20:49.253 lat (usec): min=376, max=72944, avg=35803.80, stdev=14898.57 00:20:49.253 clat percentiles (usec): 00:20:49.253 | 1.00th=[ 363], 5.00th=[ 490], 10.00th=[ 611], 20.00th=[41157], 00:20:49.253 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.253 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:49.253 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:49.253 | 99.99th=[42206] 00:20:49.253 bw ( KiB/s): min= 96, max= 136, per=3.34%, avg=115.20, stdev=16.59, samples=5 00:20:49.253 iops : min= 24, max= 34, avg=28.80, stdev= 4.15, samples=5 00:20:49.253 lat (usec) : 500=5.88%, 750=7.06% 00:20:49.253 lat (msec) : 2=1.18%, 50=84.71% 00:20:49.253 cpu : usr=0.13%, sys=0.00%, ctx=88, majf=0, minf=1 00:20:49.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.253 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1621691: Thu Jul 11 13:50:51 2024 00:20:49.253 read: IOPS=26, BW=103KiB/s (105kB/s)(332KiB/3224msec) 00:20:49.253 slat (usec): min=8, max=30590, avg=383.89, stdev=3335.53 00:20:49.253 clat (usec): min=394, max=42009, avg=38198.52, stdev=10576.05 00:20:49.253 lat (usec): min=405, max=71833, avg=38586.77, stdev=11196.35 00:20:49.253 clat percentiles (usec): 00:20:49.253 | 1.00th=[ 396], 5.00th=[ 586], 10.00th=[40633], 20.00th=[41157], 00:20:49.253 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.253 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:20:49.253 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:49.253 | 99.99th=[42206] 00:20:49.253 bw ( KiB/s): min= 94, max= 128, per=2.99%, avg=103.67, stdev=12.68, samples=6 00:20:49.253 iops : min= 23, max= 32, avg=25.83, stdev= 3.25, samples=6 00:20:49.253 lat (usec) : 500=1.19%, 750=5.95% 00:20:49.253 lat (msec) : 50=91.67% 00:20:49.253 cpu : usr=0.09%, sys=0.00%, ctx=88, majf=0, minf=1 00:20:49.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.253 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1621707: Thu Jul 11 13:50:51 2024 00:20:49.253 
read: IOPS=900, BW=3601KiB/s (3687kB/s)(9.94MiB/2826msec) 00:20:49.253 slat (nsec): min=6238, max=77199, avg=7525.97, stdev=2801.61 00:20:49.253 clat (usec): min=243, max=42003, avg=1093.64, stdev=5607.32 00:20:49.253 lat (usec): min=250, max=42025, avg=1101.16, stdev=5609.51 00:20:49.253 clat percentiles (usec): 00:20:49.253 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:20:49.253 | 30.00th=[ 302], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:20:49.253 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 334], 00:20:49.253 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:20:49.253 | 99.99th=[42206] 00:20:49.253 bw ( KiB/s): min= 96, max=12648, per=100.00%, avg=4057.60, stdev=5738.70, samples=5 00:20:49.253 iops : min= 24, max= 3162, avg=1014.40, stdev=1434.68, samples=5 00:20:49.253 lat (usec) : 250=0.08%, 500=97.88%, 750=0.04% 00:20:49.253 lat (msec) : 4=0.04%, 50=1.93% 00:20:49.253 cpu : usr=0.35%, sys=0.74%, ctx=2546, majf=0, minf=1 00:20:49.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 issued rwts: total=2545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.253 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1621712: Thu Jul 11 13:50:51 2024 00:20:49.253 read: IOPS=25, BW=101KiB/s (103kB/s)(268KiB/2657msec) 00:20:49.253 slat (nsec): min=8851, max=38315, avg=22265.81, stdev=4462.33 00:20:49.253 clat (usec): min=353, max=42187, avg=39298.29, stdev=8465.98 00:20:49.253 lat (usec): min=391, max=42196, avg=39320.55, stdev=8464.63 00:20:49.253 clat percentiles (usec): 00:20:49.253 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:49.253 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:49.253 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:20:49.253 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:49.253 | 99.99th=[42206] 00:20:49.253 bw ( KiB/s): min= 96, max= 112, per=2.90%, avg=100.80, stdev= 7.16, samples=5 00:20:49.253 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:20:49.253 lat (usec) : 500=2.94%, 750=1.47% 00:20:49.253 lat (msec) : 50=94.12% 00:20:49.253 cpu : usr=0.00%, sys=0.08%, ctx=72, majf=0, minf=2 00:20:49.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.253 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.253 00:20:49.253 Run status group 0 (all jobs): 00:20:49.253 READ: bw=3447KiB/s (3529kB/s), 101KiB/s-3601KiB/s (103kB/s-3687kB/s), io=10.9MiB (11.4MB), run=2657-3224msec 00:20:49.253 00:20:49.253 Disk stats (read/write): 00:20:49.253 nvme0n1: ios=79/0, merge=0/0, ticks=2759/0, in_queue=2759, util=92.65% 00:20:49.254 nvme0n2: ios=79/0, merge=0/0, ticks=3008/0, in_queue=3008, util=93.97% 00:20:49.254 nvme0n3: ios=2544/0, merge=0/0, ticks=2764/0, in_queue=2764, util=96.16% 00:20:49.254 nvme0n4: ios=107/0, merge=0/0, ticks=2866/0, in_queue=2866, util=100.00% 00:20:49.511 13:50:51 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:49.511 13:50:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:49.511 13:50:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:49.511 13:50:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:49.769 13:50:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:49.769 13:50:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:50.026 13:50:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:50.026 13:50:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:50.284 13:50:52 -- target/fio.sh@69 -- # fio_status=0 00:20:50.284 13:50:52 -- target/fio.sh@70 -- # wait 1621540 00:20:50.284 13:50:52 -- target/fio.sh@70 -- # fio_status=4 00:20:50.284 13:50:52 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:50.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:50.284 13:50:52 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:50.284 13:50:52 -- common/autotest_common.sh@1198 -- # local i=0 00:20:50.284 13:50:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:50.284 13:50:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:50.284 13:50:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:50.284 13:50:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:50.284 13:50:52 -- common/autotest_common.sh@1210 -- # return 0 00:20:50.284 13:50:52 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:50.284 13:50:52 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:50.284 nvmf hotplug test: fio failed as expected 00:20:50.284 13:50:52 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.541 13:50:52 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:50.541 13:50:52 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:50.541 13:50:52 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:50.541 13:50:52 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:50.541 13:50:52 -- target/fio.sh@91 -- # nvmftestfini 00:20:50.541 13:50:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:50.541 13:50:52 -- nvmf/common.sh@116 -- # sync 00:20:50.541 13:50:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:50.541 13:50:52 -- nvmf/common.sh@119 -- # set +e 00:20:50.541 13:50:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:50.541 13:50:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:50.541 rmmod nvme_tcp 00:20:50.541 rmmod nvme_fabrics 00:20:50.541 rmmod nvme_keyring 00:20:50.541 13:50:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:50.541 13:50:52 -- nvmf/common.sh@123 -- # set -e 00:20:50.541 13:50:52 -- nvmf/common.sh@124 -- # return 0 00:20:50.541 13:50:52 -- nvmf/common.sh@477 -- # '[' -n 1618802 ']' 00:20:50.541 13:50:52 -- nvmf/common.sh@478 -- # killprocess 1618802 00:20:50.541 13:50:52 -- common/autotest_common.sh@926 -- # 
'[' -z 1618802 ']' 00:20:50.541 13:50:52 -- common/autotest_common.sh@930 -- # kill -0 1618802 00:20:50.541 13:50:52 -- common/autotest_common.sh@931 -- # uname 00:20:50.541 13:50:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:50.541 13:50:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1618802 00:20:50.541 13:50:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:50.541 13:50:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:50.541 13:50:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1618802' 00:20:50.541 killing process with pid 1618802 00:20:50.541 13:50:52 -- common/autotest_common.sh@945 -- # kill 1618802 00:20:50.541 13:50:52 -- common/autotest_common.sh@950 -- # wait 1618802 00:20:50.798 13:50:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:50.798 13:50:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:50.798 13:50:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:50.798 13:50:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.798 13:50:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:50.798 13:50:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.798 13:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.798 13:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.327 13:50:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:53.327 00:20:53.327 real 0m25.852s 00:20:53.327 user 1m45.668s 00:20:53.327 sys 0m7.274s 00:20:53.327 13:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.327 13:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:53.327 ************************************ 00:20:53.327 END TEST nvmf_fio_target 00:20:53.327 ************************************ 00:20:53.327 13:50:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:53.327 13:50:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:53.327 13:50:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:53.327 13:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:53.327 ************************************ 00:20:53.327 START TEST nvmf_bdevio 00:20:53.327 ************************************ 00:20:53.328 13:50:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:53.328 * Looking for test storage... 
00:20:53.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.328 13:50:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.328 13:50:55 -- nvmf/common.sh@7 -- # uname -s 00:20:53.328 13:50:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.328 13:50:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.328 13:50:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.328 13:50:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.328 13:50:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.328 13:50:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.328 13:50:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.328 13:50:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.328 13:50:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.328 13:50:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.328 13:50:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.328 13:50:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.328 13:50:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.328 13:50:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.328 13:50:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.328 13:50:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.328 13:50:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.328 13:50:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.328 13:50:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.328 13:50:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.328 13:50:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.328 13:50:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.328 13:50:55 -- paths/export.sh@5 -- # export PATH 00:20:53.328 13:50:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.328 13:50:55 -- nvmf/common.sh@46 -- # : 0 00:20:53.328 13:50:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:53.328 13:50:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:53.328 13:50:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:53.328 13:50:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.328 13:50:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.328 13:50:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:53.328 13:50:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:53.328 13:50:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:53.328 13:50:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.328 13:50:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.328 13:50:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:53.328 13:50:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.328 13:50:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.328 13:50:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:53.328 13:50:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.328 13:50:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.328 13:50:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.328 13:50:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.328 13:50:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.328 13:50:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:53.328 13:50:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:53.328 13:50:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:53.328 13:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:58.596 13:51:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:58.596 13:51:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:58.596 13:51:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:58.596 13:51:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:58.596 13:51:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:58.596 13:51:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:58.596 13:51:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:58.596 13:51:00 -- nvmf/common.sh@294 -- # net_devs=() 00:20:58.596 13:51:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:58.596 13:51:00 -- nvmf/common.sh@295 
-- # e810=() 00:20:58.596 13:51:00 -- nvmf/common.sh@295 -- # local -ga e810 00:20:58.596 13:51:00 -- nvmf/common.sh@296 -- # x722=() 00:20:58.596 13:51:00 -- nvmf/common.sh@296 -- # local -ga x722 00:20:58.596 13:51:00 -- nvmf/common.sh@297 -- # mlx=() 00:20:58.596 13:51:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:58.596 13:51:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.596 13:51:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:58.596 13:51:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:58.596 13:51:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:58.596 13:51:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:58.596 13:51:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:58.596 13:51:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:58.596 13:51:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.596 13:51:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:58.596 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:58.596 13:51:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.597 13:51:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:58.597 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:58.597 13:51:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:58.597 13:51:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.597 13:51:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.597 13:51:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.597 13:51:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.597 13:51:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:58.597 Found 
net devices under 0000:86:00.0: cvl_0_0 00:20:58.597 13:51:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.597 13:51:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.597 13:51:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.597 13:51:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.597 13:51:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.597 13:51:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:58.597 Found net devices under 0000:86:00.1: cvl_0_1 00:20:58.597 13:51:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.597 13:51:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:58.597 13:51:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:58.597 13:51:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:58.597 13:51:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.597 13:51:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.597 13:51:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.597 13:51:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:58.597 13:51:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.597 13:51:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.597 13:51:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:58.597 13:51:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.597 13:51:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.597 13:51:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:58.597 13:51:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:58.597 13:51:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.597 13:51:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.597 13:51:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.597 13:51:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.597 13:51:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:58.597 13:51:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.597 13:51:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.597 13:51:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.597 13:51:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:58.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:20:58.597 00:20:58.597 --- 10.0.0.2 ping statistics --- 00:20:58.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.597 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:58.597 13:51:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:20:58.597 00:20:58.597 --- 10.0.0.1 ping statistics --- 00:20:58.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.597 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:20:58.597 13:51:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.597 13:51:00 -- nvmf/common.sh@410 -- # return 0 00:20:58.597 13:51:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:58.597 13:51:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.597 13:51:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:58.597 13:51:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.597 13:51:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:58.597 13:51:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:58.597 13:51:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:58.597 13:51:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:58.597 13:51:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:58.597 13:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:58.597 13:51:00 -- nvmf/common.sh@469 -- # nvmfpid=1625988 00:20:58.597 13:51:00 -- nvmf/common.sh@470 -- # waitforlisten 1625988 00:20:58.597 13:51:00 -- common/autotest_common.sh@819 -- # '[' -z 1625988 ']' 00:20:58.597 13:51:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.597 13:51:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.597 13:51:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.597 13:51:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:58.597 13:51:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.597 13:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:58.597 [2024-07-11 13:51:00.735464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:58.597 [2024-07-11 13:51:00.735505] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.597 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.597 [2024-07-11 13:51:00.792652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:58.597 [2024-07-11 13:51:00.831662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:58.597 [2024-07-11 13:51:00.831772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.597 [2024-07-11 13:51:00.831780] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.597 [2024-07-11 13:51:00.831786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
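The nvmf_tcp_init trace above splits the two E810 ports into a target/initiator pair on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings confirm reachability in both directions. A condensed sketch of that sequence, with interface names, addresses, and the NVMe/TCP port taken from the commands traced in the log:

# Sketch of the nvmf_tcp_init steps traced above (run as root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to the default port 4420
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity check both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1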
00:20:58.597 [2024-07-11 13:51:00.831892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:58.597 [2024-07-11 13:51:00.832000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:58.597 [2024-07-11 13:51:00.832108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.597 [2024-07-11 13:51:00.832109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:59.165 13:51:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:59.165 13:51:01 -- common/autotest_common.sh@852 -- # return 0 00:20:59.165 13:51:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:59.165 13:51:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 13:51:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.165 13:51:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.165 13:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 [2024-07-11 13:51:01.564544] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.165 13:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.165 13:51:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:59.165 13:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 Malloc0 00:20:59.165 13:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.165 13:51:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.165 13:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 13:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.165 13:51:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.165 13:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 13:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.165 13:51:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.165 13:51:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:59.165 13:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 [2024-07-11 13:51:01.607824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.165 13:51:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:59.165 13:51:01 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:59.165 13:51:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:59.165 13:51:01 -- nvmf/common.sh@520 -- # config=() 00:20:59.165 13:51:01 -- nvmf/common.sh@520 -- # local subsystem config 00:20:59.165 13:51:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:59.165 13:51:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:59.165 { 00:20:59.165 "params": { 00:20:59.165 "name": "Nvme$subsystem", 00:20:59.165 "trtype": "$TEST_TRANSPORT", 00:20:59.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.165 "adrfam": "ipv4", 00:20:59.165 "trsvcid": 
"$NVMF_PORT", 00:20:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.165 "hdgst": ${hdgst:-false}, 00:20:59.165 "ddgst": ${ddgst:-false} 00:20:59.165 }, 00:20:59.165 "method": "bdev_nvme_attach_controller" 00:20:59.165 } 00:20:59.165 EOF 00:20:59.165 )") 00:20:59.165 13:51:01 -- nvmf/common.sh@542 -- # cat 00:20:59.424 13:51:01 -- nvmf/common.sh@544 -- # jq . 00:20:59.424 13:51:01 -- nvmf/common.sh@545 -- # IFS=, 00:20:59.424 13:51:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:59.424 "params": { 00:20:59.424 "name": "Nvme1", 00:20:59.424 "trtype": "tcp", 00:20:59.424 "traddr": "10.0.0.2", 00:20:59.424 "adrfam": "ipv4", 00:20:59.424 "trsvcid": "4420", 00:20:59.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.424 "hdgst": false, 00:20:59.424 "ddgst": false 00:20:59.424 }, 00:20:59.424 "method": "bdev_nvme_attach_controller" 00:20:59.424 }' 00:20:59.424 [2024-07-11 13:51:01.651642] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:59.424 [2024-07-11 13:51:01.651683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1626300 ] 00:20:59.425 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.425 [2024-07-11 13:51:01.706177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.425 [2024-07-11 13:51:01.745866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.425 [2024-07-11 13:51:01.745961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.425 [2024-07-11 13:51:01.745963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.683 [2024-07-11 13:51:01.968619] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:59.683 [2024-07-11 13:51:01.968650] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:59.683 I/O targets: 00:20:59.683 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:59.683 00:20:59.683 00:20:59.683 CUnit - A unit testing framework for C - Version 2.1-3 00:20:59.683 http://cunit.sourceforge.net/ 00:20:59.683 00:20:59.683 00:20:59.683 Suite: bdevio tests on: Nvme1n1 00:20:59.683 Test: blockdev write read block ...passed 00:20:59.683 Test: blockdev write zeroes read block ...passed 00:20:59.683 Test: blockdev write zeroes read no split ...passed 00:20:59.683 Test: blockdev write zeroes read split ...passed 00:20:59.977 Test: blockdev write zeroes read split partial ...passed 00:20:59.977 Test: blockdev reset ...[2024-07-11 13:51:02.173946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:59.977 [2024-07-11 13:51:02.174004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5fbb60 (9): Bad file descriptor 00:20:59.977 [2024-07-11 13:51:02.229958] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:59.977 passed 00:20:59.977 Test: blockdev write read 8 blocks ...passed 00:20:59.977 Test: blockdev write read size > 128k ...passed 00:20:59.977 Test: blockdev write read invalid size ...passed 00:20:59.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:59.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:59.977 Test: blockdev write read max offset ...passed 00:20:59.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:59.977 Test: blockdev writev readv 8 blocks ...passed 00:20:59.977 Test: blockdev writev readv 30 x 1block ...passed 00:20:59.977 Test: blockdev writev readv block ...passed 00:20:59.977 Test: blockdev writev readv size > 128k ...passed 00:20:59.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:59.977 Test: blockdev comparev and writev ...[2024-07-11 13:51:02.404523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.404550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.404564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.404572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.404861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.404871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.404882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.404889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.405166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.405177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.405189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.405196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.405472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.405482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:59.977 [2024-07-11 13:51:02.405494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:59.977 [2024-07-11 13:51:02.405501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:00.237 passed 00:21:00.237 Test: blockdev nvme passthru rw ...passed 00:21:00.237 Test: blockdev nvme passthru vendor specific ...[2024-07-11 13:51:02.487568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.237 [2024-07-11 13:51:02.487583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:00.237 [2024-07-11 13:51:02.487731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.237 [2024-07-11 13:51:02.487740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:00.237 [2024-07-11 13:51:02.487887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.237 [2024-07-11 13:51:02.487896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:00.237 [2024-07-11 13:51:02.488044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:00.237 [2024-07-11 13:51:02.488053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:00.237 passed 00:21:00.237 Test: blockdev nvme admin passthru ...passed 00:21:00.237 Test: blockdev copy ...passed 00:21:00.237 00:21:00.237 Run Summary: Type Total Ran Passed Failed Inactive 00:21:00.237 suites 1 1 n/a 0 0 00:21:00.237 tests 23 23 23 0 0 00:21:00.237 asserts 152 152 152 0 n/a 00:21:00.237 00:21:00.237 Elapsed time = 1.153 seconds 00:21:00.496 13:51:02 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.496 13:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.496 13:51:02 -- common/autotest_common.sh@10 -- # set +x 00:21:00.496 13:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:00.496 13:51:02 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:00.496 13:51:02 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:00.496 13:51:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:00.496 13:51:02 -- nvmf/common.sh@116 -- # sync 00:21:00.496 13:51:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:00.496 13:51:02 -- nvmf/common.sh@119 -- # set +e 00:21:00.496 13:51:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:00.496 13:51:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:00.496 rmmod nvme_tcp 00:21:00.496 rmmod nvme_fabrics 00:21:00.496 rmmod nvme_keyring 00:21:00.496 13:51:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:00.496 13:51:02 -- nvmf/common.sh@123 -- # set -e 00:21:00.496 13:51:02 -- nvmf/common.sh@124 -- # return 0 00:21:00.496 13:51:02 -- nvmf/common.sh@477 -- # '[' -n 1625988 ']' 00:21:00.496 13:51:02 -- nvmf/common.sh@478 -- # killprocess 1625988 00:21:00.496 13:51:02 -- common/autotest_common.sh@926 -- # '[' -z 1625988 ']' 00:21:00.496 13:51:02 -- common/autotest_common.sh@930 -- # kill -0 1625988 00:21:00.496 13:51:02 -- common/autotest_common.sh@931 -- # uname 00:21:00.496 13:51:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.496 13:51:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1625988 00:21:00.496 13:51:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:00.496 13:51:02 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:00.496 13:51:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1625988' 00:21:00.496 killing process with pid 1625988 00:21:00.496 13:51:02 -- common/autotest_common.sh@945 -- # kill 1625988 00:21:00.496 13:51:02 -- common/autotest_common.sh@950 -- # wait 1625988 00:21:00.755 13:51:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:00.755 13:51:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:00.756 13:51:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:00.756 13:51:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.756 13:51:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:00.756 13:51:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.756 13:51:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.756 13:51:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.660 13:51:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:02.660 00:21:02.660 real 0m9.844s 00:21:02.660 user 0m12.323s 00:21:02.660 sys 0m4.495s 00:21:02.660 13:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.660 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.660 ************************************ 00:21:02.660 END TEST nvmf_bdevio 00:21:02.660 ************************************ 00:21:02.660 13:51:05 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:21:02.660 13:51:05 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.660 13:51:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:21:02.660 13:51:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:02.660 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.660 ************************************ 00:21:02.660 START TEST nvmf_bdevio_no_huge 00:21:02.660 ************************************ 00:21:02.660 13:51:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.918 * Looking for test storage... 
00:21:02.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.918 13:51:05 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.918 13:51:05 -- nvmf/common.sh@7 -- # uname -s 00:21:02.918 13:51:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.918 13:51:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.918 13:51:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.918 13:51:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.918 13:51:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.918 13:51:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.918 13:51:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.918 13:51:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.918 13:51:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.918 13:51:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.919 13:51:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.919 13:51:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.919 13:51:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.919 13:51:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.919 13:51:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.919 13:51:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.919 13:51:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.919 13:51:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.919 13:51:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.919 13:51:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.919 13:51:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.919 13:51:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.919 13:51:05 -- paths/export.sh@5 -- # export PATH 00:21:02.919 13:51:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.919 13:51:05 -- nvmf/common.sh@46 -- # : 0 00:21:02.919 13:51:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:02.919 13:51:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:02.919 13:51:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:02.919 13:51:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.919 13:51:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.919 13:51:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:02.919 13:51:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:02.919 13:51:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:02.919 13:51:05 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.919 13:51:05 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.919 13:51:05 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:02.919 13:51:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:02.919 13:51:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.919 13:51:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:02.919 13:51:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:02.919 13:51:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:02.919 13:51:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.919 13:51:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.919 13:51:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.919 13:51:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:02.919 13:51:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:02.919 13:51:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:02.919 13:51:05 -- common/autotest_common.sh@10 -- # set +x 00:21:08.189 13:51:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:08.189 13:51:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:08.189 13:51:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:08.189 13:51:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:08.189 13:51:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:08.189 13:51:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:08.189 13:51:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:08.189 13:51:10 -- nvmf/common.sh@294 -- # net_devs=() 00:21:08.189 13:51:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:08.189 13:51:10 -- nvmf/common.sh@295 
-- # e810=() 00:21:08.189 13:51:10 -- nvmf/common.sh@295 -- # local -ga e810 00:21:08.189 13:51:10 -- nvmf/common.sh@296 -- # x722=() 00:21:08.189 13:51:10 -- nvmf/common.sh@296 -- # local -ga x722 00:21:08.189 13:51:10 -- nvmf/common.sh@297 -- # mlx=() 00:21:08.189 13:51:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:08.189 13:51:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.189 13:51:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:08.189 13:51:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:08.189 13:51:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:08.189 13:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:08.189 13:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:08.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:08.189 13:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:08.189 13:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:08.189 13:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:08.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:08.190 13:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:08.190 13:51:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:08.190 13:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.190 13:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:08.190 13:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.190 13:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:08.190 Found 
net devices under 0000:86:00.0: cvl_0_0 00:21:08.190 13:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.190 13:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:08.190 13:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.190 13:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:08.190 13:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.190 13:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.190 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.190 13:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.190 13:51:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:08.190 13:51:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:08.190 13:51:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:08.190 13:51:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:08.190 13:51:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.190 13:51:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.190 13:51:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.190 13:51:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:08.190 13:51:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.190 13:51:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.190 13:51:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:08.190 13:51:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.190 13:51:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.190 13:51:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:08.190 13:51:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:08.190 13:51:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.190 13:51:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.190 13:51:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.190 13:51:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.190 13:51:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:08.190 13:51:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.190 13:51:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.190 13:51:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.190 13:51:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:08.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:08.190 00:21:08.190 --- 10.0.0.2 ping statistics --- 00:21:08.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.190 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:08.190 13:51:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:21:08.450 00:21:08.450 --- 10.0.0.1 ping statistics --- 00:21:08.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.450 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:08.450 13:51:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.450 13:51:10 -- nvmf/common.sh@410 -- # return 0 00:21:08.450 13:51:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:08.450 13:51:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.450 13:51:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:08.450 13:51:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:08.450 13:51:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.450 13:51:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:08.450 13:51:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:08.450 13:51:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:08.450 13:51:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:08.450 13:51:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:08.450 13:51:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.450 13:51:10 -- nvmf/common.sh@469 -- # nvmfpid=1630267 00:21:08.450 13:51:10 -- nvmf/common.sh@470 -- # waitforlisten 1630267 00:21:08.450 13:51:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:08.450 13:51:10 -- common/autotest_common.sh@819 -- # '[' -z 1630267 ']' 00:21:08.450 13:51:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.450 13:51:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:08.450 13:51:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.450 13:51:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:08.450 13:51:10 -- common/autotest_common.sh@10 -- # set +x 00:21:08.450 [2024-07-11 13:51:10.736937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:08.450 [2024-07-11 13:51:10.736979] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:08.450 [2024-07-11 13:51:10.795954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.450 [2024-07-11 13:51:10.857108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:08.450 [2024-07-11 13:51:10.857241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.450 [2024-07-11 13:51:10.857252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.450 [2024-07-11 13:51:10.857259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
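For reference, the no-hugepage target launch recorded in the trace above reduces to the following minimal shell sketch; the binary path, namespace name, core mask and memory size are copied from the log, while the backgrounding and pid capture are illustrative only.

# Start nvmf_tgt without hugepages inside the target network namespace.
# --no-huge -s 1024 tells DPDK to use 1024 MiB of ordinary anonymous memory,
# -m 0x78 runs reactors on cores 3-6 (matching the "Reactor started" lines),
# -i 0 sets the shared-memory id and -e 0xFFFF enables all tracepoint groups.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!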
00:21:08.450 [2024-07-11 13:51:10.857382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:08.450 [2024-07-11 13:51:10.857490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:08.450 [2024-07-11 13:51:10.857595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.450 [2024-07-11 13:51:10.857597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:09.387 13:51:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:09.387 13:51:11 -- common/autotest_common.sh@852 -- # return 0 00:21:09.387 13:51:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:09.387 13:51:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 13:51:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.387 13:51:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.387 13:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 [2024-07-11 13:51:11.566503] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.387 13:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.387 13:51:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:09.387 13:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 Malloc0 00:21:09.387 13:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.387 13:51:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.387 13:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 13:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.387 13:51:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.387 13:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 13:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.387 13:51:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.387 13:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:09.387 13:51:11 -- common/autotest_common.sh@10 -- # set +x 00:21:09.387 [2024-07-11 13:51:11.610804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.387 13:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:09.387 13:51:11 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:09.387 13:51:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:09.387 13:51:11 -- nvmf/common.sh@520 -- # config=() 00:21:09.387 13:51:11 -- nvmf/common.sh@520 -- # local subsystem config 00:21:09.387 13:51:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:09.387 13:51:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:09.388 { 00:21:09.388 "params": { 00:21:09.388 "name": "Nvme$subsystem", 00:21:09.388 "trtype": "$TEST_TRANSPORT", 00:21:09.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.388 "adrfam": "ipv4", 00:21:09.388 
"trsvcid": "$NVMF_PORT", 00:21:09.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.388 "hdgst": ${hdgst:-false}, 00:21:09.388 "ddgst": ${ddgst:-false} 00:21:09.388 }, 00:21:09.388 "method": "bdev_nvme_attach_controller" 00:21:09.388 } 00:21:09.388 EOF 00:21:09.388 )") 00:21:09.388 13:51:11 -- nvmf/common.sh@542 -- # cat 00:21:09.388 13:51:11 -- nvmf/common.sh@544 -- # jq . 00:21:09.388 13:51:11 -- nvmf/common.sh@545 -- # IFS=, 00:21:09.388 13:51:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:09.388 "params": { 00:21:09.388 "name": "Nvme1", 00:21:09.388 "trtype": "tcp", 00:21:09.388 "traddr": "10.0.0.2", 00:21:09.388 "adrfam": "ipv4", 00:21:09.388 "trsvcid": "4420", 00:21:09.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.388 "hdgst": false, 00:21:09.388 "ddgst": false 00:21:09.388 }, 00:21:09.388 "method": "bdev_nvme_attach_controller" 00:21:09.388 }' 00:21:09.388 [2024-07-11 13:51:11.658898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:09.388 [2024-07-11 13:51:11.658941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1630518 ] 00:21:09.388 [2024-07-11 13:51:11.714630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.388 [2024-07-11 13:51:11.778508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.388 [2024-07-11 13:51:11.778528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.388 [2024-07-11 13:51:11.778530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.646 [2024-07-11 13:51:12.030987] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:21:09.646 [2024-07-11 13:51:12.031016] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:09.646 I/O targets: 00:21:09.646 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:09.646 00:21:09.646 00:21:09.646 CUnit - A unit testing framework for C - Version 2.1-3 00:21:09.646 http://cunit.sourceforge.net/ 00:21:09.646 00:21:09.646 00:21:09.646 Suite: bdevio tests on: Nvme1n1 00:21:09.646 Test: blockdev write read block ...passed 00:21:09.905 Test: blockdev write zeroes read block ...passed 00:21:09.905 Test: blockdev write zeroes read no split ...passed 00:21:09.905 Test: blockdev write zeroes read split ...passed 00:21:09.905 Test: blockdev write zeroes read split partial ...passed 00:21:09.905 Test: blockdev reset ...[2024-07-11 13:51:12.237461] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:09.906 [2024-07-11 13:51:12.237511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4aa0 (9): Bad file descriptor 00:21:10.165 [2024-07-11 13:51:12.379487] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:10.165 passed 00:21:10.165 Test: blockdev write read 8 blocks ...passed 00:21:10.165 Test: blockdev write read size > 128k ...passed 00:21:10.165 Test: blockdev write read invalid size ...passed 00:21:10.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.165 Test: blockdev write read max offset ...passed 00:21:10.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.165 Test: blockdev writev readv 8 blocks ...passed 00:21:10.165 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.165 Test: blockdev writev readv block ...passed 00:21:10.165 Test: blockdev writev readv size > 128k ...passed 00:21:10.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.165 Test: blockdev comparev and writev ...[2024-07-11 13:51:12.591613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.591643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.591656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.591664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.591940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.591950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.591961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.591968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.592265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.592275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.592286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.592293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.592554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.592563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:10.165 [2024-07-11 13:51:12.592575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:10.165 [2024-07-11 13:51:12.592582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:10.425 passed 00:21:10.425 Test: blockdev nvme passthru rw ...passed 00:21:10.425 Test: blockdev nvme passthru vendor specific ...[2024-07-11 13:51:12.674591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.425 [2024-07-11 13:51:12.674608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:10.425 [2024-07-11 13:51:12.674755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.425 [2024-07-11 13:51:12.674765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:10.425 [2024-07-11 13:51:12.674911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.425 [2024-07-11 13:51:12.674919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:10.425 [2024-07-11 13:51:12.675066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.425 [2024-07-11 13:51:12.675074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:10.425 passed 00:21:10.425 Test: blockdev nvme admin passthru ...passed 00:21:10.425 Test: blockdev copy ...passed 00:21:10.425 00:21:10.425 Run Summary: Type Total Ran Passed Failed Inactive 00:21:10.425 suites 1 1 n/a 0 0 00:21:10.425 tests 23 23 23 0 0 00:21:10.425 asserts 152 152 152 0 n/a 00:21:10.425 00:21:10.425 Elapsed time = 1.397 seconds 00:21:10.684 13:51:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.684 13:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:10.684 13:51:12 -- common/autotest_common.sh@10 -- # set +x 00:21:10.684 13:51:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:10.684 13:51:13 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:10.684 13:51:13 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:10.684 13:51:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:10.684 13:51:13 -- nvmf/common.sh@116 -- # sync 00:21:10.684 13:51:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:10.684 13:51:13 -- nvmf/common.sh@119 -- # set +e 00:21:10.684 13:51:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:10.684 13:51:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:10.684 rmmod nvme_tcp 00:21:10.684 rmmod nvme_fabrics 00:21:10.684 rmmod nvme_keyring 00:21:10.684 13:51:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:10.684 13:51:13 -- nvmf/common.sh@123 -- # set -e 00:21:10.684 13:51:13 -- nvmf/common.sh@124 -- # return 0 00:21:10.684 13:51:13 -- nvmf/common.sh@477 -- # '[' -n 1630267 ']' 00:21:10.684 13:51:13 -- nvmf/common.sh@478 -- # killprocess 1630267 00:21:10.684 13:51:13 -- common/autotest_common.sh@926 -- # '[' -z 1630267 ']' 00:21:10.684 13:51:13 -- common/autotest_common.sh@930 -- # kill -0 1630267 00:21:10.684 13:51:13 -- common/autotest_common.sh@931 -- # uname 00:21:10.684 13:51:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:10.684 13:51:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1630267 00:21:10.684 13:51:13 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:21:10.684 13:51:13 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:21:10.684 13:51:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1630267' 00:21:10.684 killing process with pid 1630267 00:21:10.684 13:51:13 -- common/autotest_common.sh@945 -- # kill 1630267 00:21:10.684 13:51:13 -- common/autotest_common.sh@950 -- # wait 1630267 00:21:11.253 13:51:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.253 13:51:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:11.253 13:51:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:11.253 13:51:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.253 13:51:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:11.253 13:51:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.253 13:51:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.253 13:51:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.162 13:51:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:13.162 00:21:13.162 real 0m10.374s 00:21:13.162 user 0m13.991s 00:21:13.162 sys 0m5.011s 00:21:13.162 13:51:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.162 13:51:15 -- common/autotest_common.sh@10 -- # set +x 00:21:13.162 ************************************ 00:21:13.162 END TEST nvmf_bdevio_no_huge 00:21:13.162 ************************************ 00:21:13.162 13:51:15 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.162 13:51:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:13.162 13:51:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.162 13:51:15 -- common/autotest_common.sh@10 -- # set +x 00:21:13.162 ************************************ 00:21:13.162 START TEST nvmf_tls 00:21:13.162 ************************************ 00:21:13.162 13:51:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.162 * Looking for test storage... 
00:21:13.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.162 13:51:15 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.162 13:51:15 -- nvmf/common.sh@7 -- # uname -s 00:21:13.162 13:51:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.162 13:51:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.162 13:51:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.162 13:51:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.162 13:51:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.162 13:51:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.162 13:51:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.162 13:51:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.162 13:51:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.162 13:51:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.162 13:51:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.162 13:51:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:13.162 13:51:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.162 13:51:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.162 13:51:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.162 13:51:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.162 13:51:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.162 13:51:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.162 13:51:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.162 13:51:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.162 13:51:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.162 13:51:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.162 13:51:15 -- paths/export.sh@5 -- # export PATH 00:21:13.162 13:51:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.162 13:51:15 -- nvmf/common.sh@46 -- # : 0 00:21:13.162 13:51:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.162 13:51:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.162 13:51:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.162 13:51:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.162 13:51:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.162 13:51:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.162 13:51:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.162 13:51:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.421 13:51:15 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.421 13:51:15 -- target/tls.sh@71 -- # nvmftestinit 00:21:13.421 13:51:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.421 13:51:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.421 13:51:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.421 13:51:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.421 13:51:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.421 13:51:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.421 13:51:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.421 13:51:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.421 13:51:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:13.421 13:51:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.421 13:51:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.421 13:51:15 -- common/autotest_common.sh@10 -- # set +x 00:21:18.697 13:51:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.697 13:51:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:18.697 13:51:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:18.697 13:51:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:18.697 13:51:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:18.697 13:51:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:18.697 13:51:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:18.697 13:51:20 -- nvmf/common.sh@294 -- # net_devs=() 00:21:18.697 13:51:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:18.697 13:51:20 -- nvmf/common.sh@295 -- # e810=() 00:21:18.697 
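The pci_devs discovery that continues below walks /sys/bus/pci and buckets NICs by PCI ID (Intel E810 0x1592/0x159b, X722 0x37d2, and the listed Mellanox 0x15b3 parts). A hypothetical standalone equivalent, assuming lspci is available, would be:

# List Intel E810 functions (vendor 0x8086, device 0x159b, as matched below)
# and the kernel net devices behind them, e.g. cvl_0_0 / cvl_0_1 on this host.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  echo "Found $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
done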
13:51:20 -- nvmf/common.sh@295 -- # local -ga e810 00:21:18.697 13:51:20 -- nvmf/common.sh@296 -- # x722=() 00:21:18.697 13:51:20 -- nvmf/common.sh@296 -- # local -ga x722 00:21:18.697 13:51:20 -- nvmf/common.sh@297 -- # mlx=() 00:21:18.697 13:51:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:18.697 13:51:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.697 13:51:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:18.697 13:51:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:18.697 13:51:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.697 13:51:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:18.697 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:18.697 13:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:18.697 13:51:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:18.697 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:18.697 13:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.697 13:51:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.697 13:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.697 13:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:18.697 Found net devices under 
0000:86:00.0: cvl_0_0 00:21:18.697 13:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.697 13:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:18.697 13:51:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.697 13:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.697 13:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:18.697 Found net devices under 0000:86:00.1: cvl_0_1 00:21:18.697 13:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.697 13:51:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:18.697 13:51:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:18.697 13:51:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:18.697 13:51:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.697 13:51:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.697 13:51:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.697 13:51:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:18.697 13:51:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.697 13:51:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.697 13:51:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:18.697 13:51:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.697 13:51:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.697 13:51:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:18.697 13:51:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:18.697 13:51:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.697 13:51:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.697 13:51:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.697 13:51:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.697 13:51:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:18.697 13:51:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.697 13:51:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.697 13:51:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.697 13:51:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:18.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:18.697 00:21:18.697 --- 10.0.0.2 ping statistics --- 00:21:18.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.697 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:18.697 13:51:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:18.697 00:21:18.697 --- 10.0.0.1 ping statistics --- 00:21:18.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.697 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:18.697 13:51:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.697 13:51:21 -- nvmf/common.sh@410 -- # return 0 00:21:18.697 13:51:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:18.697 13:51:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.697 13:51:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:18.697 13:51:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:18.697 13:51:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.697 13:51:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:18.697 13:51:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:18.697 13:51:21 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:18.697 13:51:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:18.697 13:51:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:18.697 13:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:18.697 13:51:21 -- nvmf/common.sh@469 -- # nvmfpid=1634257 00:21:18.697 13:51:21 -- nvmf/common.sh@470 -- # waitforlisten 1634257 00:21:18.697 13:51:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:18.697 13:51:21 -- common/autotest_common.sh@819 -- # '[' -z 1634257 ']' 00:21:18.697 13:51:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.697 13:51:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:18.697 13:51:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.697 13:51:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:18.697 13:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:18.697 [2024-07-11 13:51:21.117334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:18.697 [2024-07-11 13:51:21.117377] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.697 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.957 [2024-07-11 13:51:21.175078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.957 [2024-07-11 13:51:21.214634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:18.957 [2024-07-11 13:51:21.214755] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.957 [2024-07-11 13:51:21.214764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.957 [2024-07-11 13:51:21.214771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
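Because the target above was started with --wait-for-rpc, tls.sh can reconfigure the socket layer before any transport exists; the trace that follows boils down to this sketch of the RPC sequence (rpc.py path as in the log, expected values taken from the jq output below):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                        # make the ssl implementation the default
$rpc sock_impl_set_options -i ssl --tls-version 13       # request TLS 1.3, then verify:
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # -> 13
$rpc sock_impl_set_options -i ssl --enable-ktls          # toggle kernel TLS offload on...
$rpc sock_impl_get_options -i ssl | jq -r .enable_ktls   # -> true
$rpc sock_impl_set_options -i ssl --disable-ktls         # ...and back off again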
00:21:18.957 [2024-07-11 13:51:21.214786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.957 13:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.957 13:51:21 -- common/autotest_common.sh@852 -- # return 0 00:21:18.957 13:51:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:18.957 13:51:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:18.957 13:51:21 -- common/autotest_common.sh@10 -- # set +x 00:21:18.957 13:51:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.957 13:51:21 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:21:18.957 13:51:21 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:19.216 true 00:21:19.216 13:51:21 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.216 13:51:21 -- target/tls.sh@82 -- # jq -r .tls_version 00:21:19.216 13:51:21 -- target/tls.sh@82 -- # version=0 00:21:19.216 13:51:21 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:21:19.216 13:51:21 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:19.475 13:51:21 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.475 13:51:21 -- target/tls.sh@90 -- # jq -r .tls_version 00:21:19.734 13:51:21 -- target/tls.sh@90 -- # version=13 00:21:19.734 13:51:21 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:21:19.734 13:51:21 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:19.734 13:51:22 -- target/tls.sh@98 -- # jq -r .tls_version 00:21:19.734 13:51:22 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.994 13:51:22 -- target/tls.sh@98 -- # version=7 00:21:19.994 13:51:22 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:21:19.994 13:51:22 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.994 13:51:22 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:20.253 13:51:22 -- target/tls.sh@105 -- # ktls=false 00:21:20.253 13:51:22 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:21:20.253 13:51:22 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:20.253 13:51:22 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:20.253 13:51:22 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:20.512 13:51:22 -- target/tls.sh@113 -- # ktls=true 00:21:20.512 13:51:22 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:21:20.512 13:51:22 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:20.797 13:51:22 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:20.797 13:51:22 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:21:20.797 13:51:23 -- target/tls.sh@121 -- # ktls=false 00:21:20.797 13:51:23 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:21:20.797 13:51:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:21:20.797 13:51:23 -- target/tls.sh@49 -- # local key hash crc 00:21:20.797 13:51:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:21:20.797 13:51:23 -- target/tls.sh@51 -- # hash=01 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # gzip -1 -c 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # tail -c8 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # head -c 4 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # crc='p$H�' 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:20.797 13:51:23 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:20.797 13:51:23 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:21:20.797 13:51:23 -- target/tls.sh@49 -- # local key hash crc 00:21:20.797 13:51:23 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:21:20.797 13:51:23 -- target/tls.sh@51 -- # hash=01 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # gzip -1 -c 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # tail -c8 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # head -c 4 00:21:20.797 13:51:23 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:21:20.797 13:51:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:20.797 13:51:23 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:20.797 13:51:23 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:20.797 13:51:23 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:20.797 13:51:23 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:20.797 13:51:23 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:20.797 13:51:23 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:20.797 13:51:23 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:20.797 13:51:23 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:21.056 13:51:23 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:21.313 13:51:23 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:21.313 13:51:23 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:21.313 13:51:23 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.313 [2024-07-11 13:51:23.711998] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
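The format_interchange_psk fragments above turn a raw hex key into the NVMeTLSkey-1 interchange form. A minimal standalone re-derivation that matches the traced commands (a sketch, not necessarily the current SPDK helper source):

    key=00112233445566778899aabbccddeeff   # configured key, used as an ASCII string
    hash=01                                # digest selector embedded in the envelope
    # gzip -1 writes the input's CRC32 as the first 4 of its 8 trailer bytes
    # (CRC32 little-endian, then ISIZE), so tail -c8 | head -c4 extracts the CRC.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # The interchange form is base64(key || crc) inside the NVMeTLSkey-1 envelope.
    # Caveat of this sketch: a NUL byte in the CRC would not survive the $() capture.
    echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The same procedure with hash=02 and a 48-byte key yields the key_long.txt value generated further down in this log.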
00:21:21.313 13:51:23 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.572 13:51:23 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.572 [2024-07-11 13:51:24.016777] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.572 [2024-07-11 13:51:24.016976] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.831 13:51:24 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.831 malloc0 00:21:21.831 13:51:24 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.090 13:51:24 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:22.090 13:51:24 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:22.090 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.302 Initializing NVMe Controllers 00:21:34.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.302 Initialization complete. Launching workers. 
00:21:34.302 ======================================================== 00:21:34.302 Latency(us) 00:21:34.302 Device Information : IOPS MiB/s Average min max 00:21:34.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17156.37 67.02 3730.76 728.52 5002.98 00:21:34.302 ======================================================== 00:21:34.302 Total : 17156.37 67.02 3730.76 728.52 5002.98 00:21:34.302 00:21:34.302 13:51:34 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:34.302 13:51:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.302 13:51:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.302 13:51:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.302 13:51:34 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:34.302 13:51:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.302 13:51:34 -- target/tls.sh@28 -- # bdevperf_pid=1636458 00:21:34.302 13:51:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.302 13:51:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.302 13:51:34 -- target/tls.sh@31 -- # waitforlisten 1636458 /var/tmp/bdevperf.sock 00:21:34.302 13:51:34 -- common/autotest_common.sh@819 -- # '[' -z 1636458 ']' 00:21:34.302 13:51:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.302 13:51:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:34.302 13:51:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.302 13:51:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:34.302 13:51:34 -- common/autotest_common.sh@10 -- # set +x 00:21:34.302 [2024-07-11 13:51:34.660185] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
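Collapsing the setup traced between the transport init and the first perf run: TLS is enabled on the ssl sock implementation, the framework is started, and a TLS listener plus a per-host PSK are configured. A hedged recap of the RPC sequence (paths shortened; every call appears verbatim in the trace above):

    RPC="scripts/rpc.py"   # target-side, i.e. run under ip netns exec in this log
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k requests TLS on the listener ("TLS support is considered experimental")
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # The PSK is bound to the host NQN, not to the subsystem or the listener:
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt

The initiator side then connects with spdk_nvme_perf -S ssl --psk-path (the first run above) or bdev_nvme_attach_controller --psk (the bdevperf runs that follow); the NOT-wrapped runs deliberately mismatch the key, the host NQN or the subsystem NQN and expect the attach to fail.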
00:21:34.302 [2024-07-11 13:51:34.660232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636458 ] 00:21:34.302 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.302 [2024-07-11 13:51:34.710317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.302 [2024-07-11 13:51:34.749019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.302 13:51:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:34.302 13:51:35 -- common/autotest_common.sh@852 -- # return 0 00:21:34.302 13:51:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:34.302 [2024-07-11 13:51:35.610923] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.302 TLSTESTn1 00:21:34.302 13:51:35 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:34.302 Running I/O for 10 seconds... 00:21:44.282 00:21:44.282 Latency(us) 00:21:44.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.282 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.282 Verification LBA range: start 0x0 length 0x2000 00:21:44.282 TLSTESTn1 : 10.01 4938.95 19.29 0.00 0.00 25893.22 3846.68 44450.50 00:21:44.282 =================================================================================================================== 00:21:44.282 Total : 4938.95 19.29 0.00 0.00 25893.22 3846.68 44450.50 00:21:44.282 0 00:21:44.282 13:51:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.282 13:51:45 -- target/tls.sh@45 -- # killprocess 1636458 00:21:44.282 13:51:45 -- common/autotest_common.sh@926 -- # '[' -z 1636458 ']' 00:21:44.282 13:51:45 -- common/autotest_common.sh@930 -- # kill -0 1636458 00:21:44.282 13:51:45 -- common/autotest_common.sh@931 -- # uname 00:21:44.282 13:51:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.282 13:51:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1636458 00:21:44.282 13:51:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:44.282 13:51:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:44.282 13:51:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1636458' 00:21:44.282 killing process with pid 1636458 00:21:44.282 13:51:45 -- common/autotest_common.sh@945 -- # kill 1636458 00:21:44.282 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.282 00:21:44.282 Latency(us) 00:21:44.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.282 =================================================================================================================== 00:21:44.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.282 13:51:45 -- common/autotest_common.sh@950 -- # wait 1636458 00:21:44.282 13:51:46 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:44.282 13:51:46 -- common/autotest_common.sh@640 -- # local es=0 00:21:44.282 13:51:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:44.282 13:51:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:44.282 13:51:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:44.282 13:51:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:44.282 13:51:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:44.282 13:51:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:44.282 13:51:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.282 13:51:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.282 13:51:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:44.282 13:51:46 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:21:44.282 13:51:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.282 13:51:46 -- target/tls.sh@28 -- # bdevperf_pid=1638328 00:21:44.282 13:51:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.282 13:51:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.282 13:51:46 -- target/tls.sh@31 -- # waitforlisten 1638328 /var/tmp/bdevperf.sock 00:21:44.282 13:51:46 -- common/autotest_common.sh@819 -- # '[' -z 1638328 ']' 00:21:44.282 13:51:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.282 13:51:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:44.282 13:51:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.282 13:51:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:44.282 13:51:46 -- common/autotest_common.sh@10 -- # set +x 00:21:44.282 [2024-07-11 13:51:46.110875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:44.282 [2024-07-11 13:51:46.110923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638328 ] 00:21:44.282 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.282 [2024-07-11 13:51:46.159417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.282 [2024-07-11 13:51:46.196001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.541 13:51:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:44.541 13:51:46 -- common/autotest_common.sh@852 -- # return 0 00:21:44.541 13:51:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:21:44.799 [2024-07-11 13:51:47.072577] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.799 [2024-07-11 13:51:47.081997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:44.799 [2024-07-11 13:51:47.082881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a530 (107): Transport endpoint is not connected 00:21:44.799 [2024-07-11 13:51:47.083875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a530 (9): Bad file descriptor 00:21:44.799 [2024-07-11 13:51:47.084875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:44.799 [2024-07-11 13:51:47.084884] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:44.799 [2024-07-11 13:51:47.084894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:44.799 request: 00:21:44.799 { 00:21:44.799 "name": "TLSTEST", 00:21:44.799 "trtype": "tcp", 00:21:44.799 "traddr": "10.0.0.2", 00:21:44.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.799 "adrfam": "ipv4", 00:21:44.799 "trsvcid": "4420", 00:21:44.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.799 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:21:44.799 "method": "bdev_nvme_attach_controller", 00:21:44.799 "req_id": 1 00:21:44.799 } 00:21:44.799 Got JSON-RPC error response 00:21:44.799 response: 00:21:44.799 { 00:21:44.799 "code": -32602, 00:21:44.799 "message": "Invalid parameters" 00:21:44.799 } 00:21:44.799 13:51:47 -- target/tls.sh@36 -- # killprocess 1638328 00:21:44.799 13:51:47 -- common/autotest_common.sh@926 -- # '[' -z 1638328 ']' 00:21:44.799 13:51:47 -- common/autotest_common.sh@930 -- # kill -0 1638328 00:21:44.799 13:51:47 -- common/autotest_common.sh@931 -- # uname 00:21:44.799 13:51:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.799 13:51:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1638328 00:21:44.799 13:51:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:44.799 13:51:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:44.799 13:51:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1638328' 00:21:44.799 killing process with pid 1638328 00:21:44.799 13:51:47 -- common/autotest_common.sh@945 -- # kill 1638328 00:21:44.799 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.799 00:21:44.799 Latency(us) 00:21:44.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.799 =================================================================================================================== 00:21:44.799 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.799 13:51:47 -- common/autotest_common.sh@950 -- # wait 1638328 00:21:45.058 13:51:47 -- target/tls.sh@37 -- # return 1 00:21:45.058 13:51:47 -- common/autotest_common.sh@643 -- # es=1 00:21:45.058 13:51:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:45.058 13:51:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:45.058 13:51:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:45.058 13:51:47 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:45.058 13:51:47 -- common/autotest_common.sh@640 -- # local es=0 00:21:45.058 13:51:47 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:45.058 13:51:47 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:45.058 13:51:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:45.058 13:51:47 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:45.058 13:51:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:45.058 13:51:47 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:45.058 13:51:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.058 13:51:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:45.058 13:51:47 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:21:45.058 13:51:47 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:45.058 13:51:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.058 13:51:47 -- target/tls.sh@28 -- # bdevperf_pid=1638573 00:21:45.058 13:51:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.058 13:51:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.058 13:51:47 -- target/tls.sh@31 -- # waitforlisten 1638573 /var/tmp/bdevperf.sock 00:21:45.058 13:51:47 -- common/autotest_common.sh@819 -- # '[' -z 1638573 ']' 00:21:45.058 13:51:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.058 13:51:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:45.058 13:51:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.058 13:51:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:45.058 13:51:47 -- common/autotest_common.sh@10 -- # set +x 00:21:45.058 [2024-07-11 13:51:47.353262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:45.058 [2024-07-11 13:51:47.353306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638573 ] 00:21:45.058 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.058 [2024-07-11 13:51:47.402760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.058 [2024-07-11 13:51:47.441234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.995 13:51:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:45.996 13:51:48 -- common/autotest_common.sh@852 -- # return 0 00:21:45.996 13:51:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:45.996 [2024-07-11 13:51:48.301756] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.996 [2024-07-11 13:51:48.306297] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:45.996 [2024-07-11 13:51:48.306318] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:45.996 [2024-07-11 13:51:48.306342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:45.996 [2024-07-11 13:51:48.307011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891530 (107): Transport endpoint is not connected 00:21:45.996 [2024-07-11 13:51:48.308004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x891530 (9): Bad file descriptor 00:21:45.996 [2024-07-11 13:51:48.309005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.996 [2024-07-11 13:51:48.309014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:45.996 [2024-07-11 13:51:48.309020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.996 request: 00:21:45.996 { 00:21:45.996 "name": "TLSTEST", 00:21:45.996 "trtype": "tcp", 00:21:45.996 "traddr": "10.0.0.2", 00:21:45.996 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.996 "adrfam": "ipv4", 00:21:45.996 "trsvcid": "4420", 00:21:45.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.996 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:45.996 "method": "bdev_nvme_attach_controller", 00:21:45.996 "req_id": 1 00:21:45.996 } 00:21:45.996 Got JSON-RPC error response 00:21:45.996 response: 00:21:45.996 { 00:21:45.996 "code": -32602, 00:21:45.996 "message": "Invalid parameters" 00:21:45.996 } 00:21:45.996 13:51:48 -- target/tls.sh@36 -- # killprocess 1638573 00:21:45.996 13:51:48 -- common/autotest_common.sh@926 -- # '[' -z 1638573 ']' 00:21:45.996 13:51:48 -- common/autotest_common.sh@930 -- # kill -0 1638573 00:21:45.996 13:51:48 -- common/autotest_common.sh@931 -- # uname 00:21:45.996 13:51:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:45.996 13:51:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1638573 00:21:45.996 13:51:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:45.996 13:51:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:45.996 13:51:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1638573' 00:21:45.996 killing process with pid 1638573 00:21:45.996 13:51:48 -- common/autotest_common.sh@945 -- # kill 1638573 00:21:45.996 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.996 00:21:45.996 Latency(us) 00:21:45.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.996 =================================================================================================================== 00:21:45.996 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:45.996 13:51:48 -- common/autotest_common.sh@950 -- # wait 1638573 00:21:46.256 13:51:48 -- target/tls.sh@37 -- # return 1 00:21:46.256 13:51:48 -- common/autotest_common.sh@643 -- # es=1 00:21:46.256 13:51:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:46.256 13:51:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:46.256 13:51:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:46.256 13:51:48 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:46.256 13:51:48 -- common/autotest_common.sh@640 -- # local es=0 00:21:46.256 13:51:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:46.256 13:51:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:46.256 13:51:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:46.256 13:51:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:46.256 13:51:48 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:46.256 13:51:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:46.256 13:51:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.256 13:51:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:46.256 13:51:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.256 13:51:48 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:46.256 13:51:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.256 13:51:48 -- target/tls.sh@28 -- # bdevperf_pid=1638809 00:21:46.256 13:51:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.256 13:51:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.256 13:51:48 -- target/tls.sh@31 -- # waitforlisten 1638809 /var/tmp/bdevperf.sock 00:21:46.256 13:51:48 -- common/autotest_common.sh@819 -- # '[' -z 1638809 ']' 00:21:46.256 13:51:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.256 13:51:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.256 13:51:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.256 13:51:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.256 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:21:46.256 [2024-07-11 13:51:48.591936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:46.256 [2024-07-11 13:51:48.591984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638809 ] 00:21:46.256 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.256 [2024-07-11 13:51:48.643007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.256 [2024-07-11 13:51:48.679617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.195 13:51:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.195 13:51:49 -- common/autotest_common.sh@852 -- # return 0 00:21:47.195 13:51:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:47.195 [2024-07-11 13:51:49.524734] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.195 [2024-07-11 13:51:49.529437] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.195 [2024-07-11 13:51:49.529458] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:47.195 [2024-07-11 13:51:49.529485] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:47.195 [2024-07-11 13:51:49.530148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25530 (107): Transport endpoint is not connected 00:21:47.195 [2024-07-11 13:51:49.531139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf25530 (9): Bad file descriptor 00:21:47.195 [2024-07-11 13:51:49.532141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:47.195 [2024-07-11 13:51:49.532149] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:47.195 [2024-07-11 13:51:49.532156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:47.195 request: 00:21:47.195 { 00:21:47.195 "name": "TLSTEST", 00:21:47.195 "trtype": "tcp", 00:21:47.195 "traddr": "10.0.0.2", 00:21:47.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.195 "adrfam": "ipv4", 00:21:47.195 "trsvcid": "4420", 00:21:47.195 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.195 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:47.195 "method": "bdev_nvme_attach_controller", 00:21:47.195 "req_id": 1 00:21:47.195 } 00:21:47.195 Got JSON-RPC error response 00:21:47.195 response: 00:21:47.195 { 00:21:47.195 "code": -32602, 00:21:47.195 "message": "Invalid parameters" 00:21:47.195 } 00:21:47.195 13:51:49 -- target/tls.sh@36 -- # killprocess 1638809 00:21:47.195 13:51:49 -- common/autotest_common.sh@926 -- # '[' -z 1638809 ']' 00:21:47.195 13:51:49 -- common/autotest_common.sh@930 -- # kill -0 1638809 00:21:47.195 13:51:49 -- common/autotest_common.sh@931 -- # uname 00:21:47.195 13:51:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:47.195 13:51:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1638809 00:21:47.195 13:51:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:47.195 13:51:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:47.195 13:51:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1638809' 00:21:47.195 killing process with pid 1638809 00:21:47.195 13:51:49 -- common/autotest_common.sh@945 -- # kill 1638809 00:21:47.195 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.195 00:21:47.195 Latency(us) 00:21:47.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.195 =================================================================================================================== 00:21:47.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.195 13:51:49 -- common/autotest_common.sh@950 -- # wait 1638809 00:21:47.454 13:51:49 -- target/tls.sh@37 -- # return 1 00:21:47.454 13:51:49 -- common/autotest_common.sh@643 -- # es=1 00:21:47.454 13:51:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:47.454 13:51:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:47.454 13:51:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:47.454 13:51:49 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.454 13:51:49 -- common/autotest_common.sh@640 -- # local es=0 00:21:47.454 13:51:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.454 13:51:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:47.454 13:51:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:47.454 13:51:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:47.454 13:51:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:47.454 13:51:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:47.454 13:51:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.454 13:51:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.454 13:51:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:47.454 13:51:49 -- target/tls.sh@23 -- # psk= 00:21:47.454 13:51:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.454 13:51:49 -- target/tls.sh@28 
-- # bdevperf_pid=1639051 00:21:47.454 13:51:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.454 13:51:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.454 13:51:49 -- target/tls.sh@31 -- # waitforlisten 1639051 /var/tmp/bdevperf.sock 00:21:47.454 13:51:49 -- common/autotest_common.sh@819 -- # '[' -z 1639051 ']' 00:21:47.454 13:51:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.454 13:51:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.454 13:51:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.454 13:51:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.454 13:51:49 -- common/autotest_common.sh@10 -- # set +x 00:21:47.454 [2024-07-11 13:51:49.806137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:47.454 [2024-07-11 13:51:49.806196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639051 ] 00:21:47.454 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.454 [2024-07-11 13:51:49.855744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.454 [2024-07-11 13:51:49.889662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.390 13:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.390 13:51:50 -- common/autotest_common.sh@852 -- # return 0 00:21:48.390 13:51:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:48.390 [2024-07-11 13:51:50.761369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:48.390 [2024-07-11 13:51:50.762728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155dc00 (9): Bad file descriptor 00:21:48.390 [2024-07-11 13:51:50.763727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.390 [2024-07-11 13:51:50.763738] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:48.390 [2024-07-11 13:51:50.763745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:48.390 request: 00:21:48.390 { 00:21:48.390 "name": "TLSTEST", 00:21:48.390 "trtype": "tcp", 00:21:48.390 "traddr": "10.0.0.2", 00:21:48.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.390 "adrfam": "ipv4", 00:21:48.390 "trsvcid": "4420", 00:21:48.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.390 "method": "bdev_nvme_attach_controller", 00:21:48.390 "req_id": 1 00:21:48.390 } 00:21:48.390 Got JSON-RPC error response 00:21:48.390 response: 00:21:48.390 { 00:21:48.390 "code": -32602, 00:21:48.390 "message": "Invalid parameters" 00:21:48.390 } 00:21:48.390 13:51:50 -- target/tls.sh@36 -- # killprocess 1639051 00:21:48.390 13:51:50 -- common/autotest_common.sh@926 -- # '[' -z 1639051 ']' 00:21:48.390 13:51:50 -- common/autotest_common.sh@930 -- # kill -0 1639051 00:21:48.390 13:51:50 -- common/autotest_common.sh@931 -- # uname 00:21:48.390 13:51:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.390 13:51:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1639051 00:21:48.390 13:51:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:48.390 13:51:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:48.390 13:51:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1639051' 00:21:48.390 killing process with pid 1639051 00:21:48.390 13:51:50 -- common/autotest_common.sh@945 -- # kill 1639051 00:21:48.390 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.390 00:21:48.390 Latency(us) 00:21:48.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.390 =================================================================================================================== 00:21:48.390 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.390 13:51:50 -- common/autotest_common.sh@950 -- # wait 1639051 00:21:48.649 13:51:50 -- target/tls.sh@37 -- # return 1 00:21:48.649 13:51:50 -- common/autotest_common.sh@643 -- # es=1 00:21:48.649 13:51:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:48.649 13:51:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:48.649 13:51:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:48.649 13:51:50 -- target/tls.sh@167 -- # killprocess 1634257 00:21:48.649 13:51:50 -- common/autotest_common.sh@926 -- # '[' -z 1634257 ']' 00:21:48.649 13:51:50 -- common/autotest_common.sh@930 -- # kill -0 1634257 00:21:48.649 13:51:50 -- common/autotest_common.sh@931 -- # uname 00:21:48.649 13:51:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:48.649 13:51:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1634257 00:21:48.649 13:51:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:48.649 13:51:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:48.649 13:51:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1634257' 00:21:48.649 killing process with pid 1634257 00:21:48.649 13:51:51 -- common/autotest_common.sh@945 -- # kill 1634257 00:21:48.649 13:51:51 -- common/autotest_common.sh@950 -- # wait 1634257 00:21:48.908 13:51:51 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:48.908 13:51:51 -- target/tls.sh@49 -- # local key hash crc 00:21:48.908 13:51:51 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:48.908 13:51:51 -- target/tls.sh@51 -- # hash=02 00:21:48.908 13:51:51 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:21:48.908 13:51:51 -- target/tls.sh@52 -- # gzip -1 -c 00:21:48.908 13:51:51 -- target/tls.sh@52 -- # head -c 4 00:21:48.908 13:51:51 -- target/tls.sh@52 -- # tail -c8 00:21:48.908 13:51:51 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:48.908 13:51:51 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:48.908 13:51:51 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:48.908 13:51:51 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.908 13:51:51 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.908 13:51:51 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:48.908 13:51:51 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.908 13:51:51 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:48.908 13:51:51 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:48.908 13:51:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:48.908 13:51:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.908 13:51:51 -- common/autotest_common.sh@10 -- # set +x 00:21:48.908 13:51:51 -- nvmf/common.sh@469 -- # nvmfpid=1639307 00:21:48.908 13:51:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.908 13:51:51 -- nvmf/common.sh@470 -- # waitforlisten 1639307 00:21:48.908 13:51:51 -- common/autotest_common.sh@819 -- # '[' -z 1639307 ']' 00:21:48.908 13:51:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.908 13:51:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.908 13:51:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.908 13:51:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.908 13:51:51 -- common/autotest_common.sh@10 -- # set +x 00:21:48.908 [2024-07-11 13:51:51.295366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:48.908 [2024-07-11 13:51:51.295414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.908 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.908 [2024-07-11 13:51:51.353082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.167 [2024-07-11 13:51:51.390715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:49.167 [2024-07-11 13:51:51.390840] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.167 [2024-07-11 13:51:51.390848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.167 [2024-07-11 13:51:51.390855] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.167 [2024-07-11 13:51:51.390872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.736 13:51:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.736 13:51:52 -- common/autotest_common.sh@852 -- # return 0 00:21:49.736 13:51:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:49.736 13:51:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.736 13:51:52 -- common/autotest_common.sh@10 -- # set +x 00:21:49.736 13:51:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.736 13:51:52 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:49.736 13:51:52 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:49.736 13:51:52 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.995 [2024-07-11 13:51:52.272764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.995 13:51:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:50.253 13:51:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:50.253 [2024-07-11 13:51:52.605637] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.253 [2024-07-11 13:51:52.605829] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.253 13:51:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.512 malloc0 00:21:50.512 13:51:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.512 13:51:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:50.773 13:51:53 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:50.773 13:51:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.773 13:51:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:50.773 13:51:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.773 13:51:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:50.773 13:51:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.773 13:51:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.773 13:51:53 -- target/tls.sh@28 -- # bdevperf_pid=1639572 00:21:50.773 13:51:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.773 13:51:53 -- target/tls.sh@31 -- # waitforlisten 1639572 /var/tmp/bdevperf.sock 00:21:50.773 13:51:53 -- common/autotest_common.sh@819 -- # '[' -z 1639572 
']' 00:21:50.773 13:51:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.773 13:51:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:50.773 13:51:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.773 13:51:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:50.773 13:51:53 -- common/autotest_common.sh@10 -- # set +x 00:21:50.773 [2024-07-11 13:51:53.140252] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:50.773 [2024-07-11 13:51:53.140297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639572 ] 00:21:50.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.773 [2024-07-11 13:51:53.190392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.068 [2024-07-11 13:51:53.229280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.636 13:51:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:51.636 13:51:53 -- common/autotest_common.sh@852 -- # return 0 00:21:51.636 13:51:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:51.895 [2024-07-11 13:51:54.094594] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.895 TLSTESTn1 00:21:51.895 13:51:54 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.895 Running I/O for 10 seconds... 
00:22:01.870 00:22:01.870 Latency(us) 00:22:01.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.870 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:01.870 Verification LBA range: start 0x0 length 0x2000 00:22:01.870 TLSTESTn1 : 10.02 4973.30 19.43 0.00 0.00 25709.58 4929.45 44678.46 00:22:01.870 =================================================================================================================== 00:22:01.870 Total : 4973.30 19.43 0.00 0.00 25709.58 4929.45 44678.46 00:22:01.870 0 00:22:02.130 13:52:04 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.130 13:52:04 -- target/tls.sh@45 -- # killprocess 1639572 00:22:02.130 13:52:04 -- common/autotest_common.sh@926 -- # '[' -z 1639572 ']' 00:22:02.130 13:52:04 -- common/autotest_common.sh@930 -- # kill -0 1639572 00:22:02.130 13:52:04 -- common/autotest_common.sh@931 -- # uname 00:22:02.130 13:52:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.130 13:52:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1639572 00:22:02.130 13:52:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:02.130 13:52:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:02.130 13:52:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1639572' 00:22:02.130 killing process with pid 1639572 00:22:02.130 13:52:04 -- common/autotest_common.sh@945 -- # kill 1639572 00:22:02.130 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.130 00:22:02.130 Latency(us) 00:22:02.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.130 =================================================================================================================== 00:22:02.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.130 13:52:04 -- common/autotest_common.sh@950 -- # wait 1639572 00:22:02.130 13:52:04 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:02.130 13:52:04 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:02.130 13:52:04 -- common/autotest_common.sh@640 -- # local es=0 00:22:02.130 13:52:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:02.130 13:52:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:22:02.130 13:52:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.130 13:52:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:22:02.130 13:52:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:02.130 13:52:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:02.130 13:52:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.130 13:52:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.130 13:52:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.130 13:52:04 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:22:02.130 13:52:04 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.130 13:52:04 -- target/tls.sh@28 -- # bdevperf_pid=1641464 00:22:02.130 13:52:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.130 13:52:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.130 13:52:04 -- target/tls.sh@31 -- # waitforlisten 1641464 /var/tmp/bdevperf.sock 00:22:02.130 13:52:04 -- common/autotest_common.sh@819 -- # '[' -z 1641464 ']' 00:22:02.130 13:52:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.130 13:52:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:02.130 13:52:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.130 13:52:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:02.130 13:52:04 -- common/autotest_common.sh@10 -- # set +x 00:22:02.390 [2024-07-11 13:52:04.600599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:02.390 [2024-07-11 13:52:04.600645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641464 ] 00:22:02.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.390 [2024-07-11 13:52:04.650515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.390 [2024-07-11 13:52:04.689806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.958 13:52:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.958 13:52:05 -- common/autotest_common.sh@852 -- # return 0 00:22:02.958 13:52:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:03.217 [2024-07-11 13:52:05.560075] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.217 [2024-07-11 13:52:05.560106] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:03.217 request: 00:22:03.217 { 00:22:03.217 "name": "TLSTEST", 00:22:03.217 "trtype": "tcp", 00:22:03.217 "traddr": "10.0.0.2", 00:22:03.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.217 "adrfam": "ipv4", 00:22:03.217 "trsvcid": "4420", 00:22:03.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.217 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:03.217 "method": "bdev_nvme_attach_controller", 00:22:03.217 "req_id": 1 00:22:03.217 } 00:22:03.217 Got JSON-RPC error response 00:22:03.217 response: 00:22:03.217 { 00:22:03.217 "code": -22, 00:22:03.217 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:03.217 } 00:22:03.217 13:52:05 -- target/tls.sh@36 -- # killprocess 1641464 00:22:03.217 13:52:05 -- common/autotest_common.sh@926 -- # '[' -z 1641464 ']' 00:22:03.217 13:52:05 -- 
common/autotest_common.sh@930 -- # kill -0 1641464 00:22:03.217 13:52:05 -- common/autotest_common.sh@931 -- # uname 00:22:03.217 13:52:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:03.217 13:52:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1641464 00:22:03.217 13:52:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:03.217 13:52:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:03.217 13:52:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1641464' 00:22:03.217 killing process with pid 1641464 00:22:03.217 13:52:05 -- common/autotest_common.sh@945 -- # kill 1641464 00:22:03.217 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.217 00:22:03.217 Latency(us) 00:22:03.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.217 =================================================================================================================== 00:22:03.217 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.217 13:52:05 -- common/autotest_common.sh@950 -- # wait 1641464 00:22:03.475 13:52:05 -- target/tls.sh@37 -- # return 1 00:22:03.475 13:52:05 -- common/autotest_common.sh@643 -- # es=1 00:22:03.475 13:52:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:03.475 13:52:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:03.475 13:52:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:03.475 13:52:05 -- target/tls.sh@183 -- # killprocess 1639307 00:22:03.475 13:52:05 -- common/autotest_common.sh@926 -- # '[' -z 1639307 ']' 00:22:03.475 13:52:05 -- common/autotest_common.sh@930 -- # kill -0 1639307 00:22:03.475 13:52:05 -- common/autotest_common.sh@931 -- # uname 00:22:03.475 13:52:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:03.475 13:52:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1639307 00:22:03.475 13:52:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:03.475 13:52:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:03.475 13:52:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1639307' 00:22:03.475 killing process with pid 1639307 00:22:03.475 13:52:05 -- common/autotest_common.sh@945 -- # kill 1639307 00:22:03.475 13:52:05 -- common/autotest_common.sh@950 -- # wait 1639307 00:22:03.733 13:52:06 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:03.733 13:52:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:03.733 13:52:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:03.733 13:52:06 -- common/autotest_common.sh@10 -- # set +x 00:22:03.733 13:52:06 -- nvmf/common.sh@469 -- # nvmfpid=1641730 00:22:03.733 13:52:06 -- nvmf/common.sh@470 -- # waitforlisten 1641730 00:22:03.733 13:52:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.733 13:52:06 -- common/autotest_common.sh@819 -- # '[' -z 1641730 ']' 00:22:03.733 13:52:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.733 13:52:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:03.733 13:52:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
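The chmod 0666 / NOT run_bdevperf sequence above is the negative half of the PSK permission check: with the key file world-readable, tcp_load_psk on the initiator side refuses the file and bdev_nvme_attach_controller comes back with -22 ('Could not retrieve PSK from file'). A minimal sketch of the same probe, reusing the rpc.py invocation exactly as traced (paths shortened relative to the spdk checkout); that owner-only 0600 is the accepted mode is inferred from the chmod 0600 that target/tls.sh@190 issues further down, not from the SPDK source:

  key=./test/nvmf/target/key_long.txt
  chmod 0666 "$key"    # world-readable PSK: the attach below is expected to fail
  if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"; then
    echo "unexpected: attach succeeded with a lax PSK mode" >&2
  fi
  chmod 0600 "$key"    # back to owner-only before the positive-path runs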
00:22:03.733 13:52:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:03.733 13:52:06 -- common/autotest_common.sh@10 -- # set +x 00:22:03.733 [2024-07-11 13:52:06.075970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:03.733 [2024-07-11 13:52:06.076015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.733 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.733 [2024-07-11 13:52:06.132372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.733 [2024-07-11 13:52:06.170212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.733 [2024-07-11 13:52:06.170338] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.733 [2024-07-11 13:52:06.170346] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.733 [2024-07-11 13:52:06.170352] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.733 [2024-07-11 13:52:06.170368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.670 13:52:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:04.670 13:52:06 -- common/autotest_common.sh@852 -- # return 0 00:22:04.670 13:52:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:04.670 13:52:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:04.670 13:52:06 -- common/autotest_common.sh@10 -- # set +x 00:22:04.670 13:52:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.670 13:52:06 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.670 13:52:06 -- common/autotest_common.sh@640 -- # local es=0 00:22:04.670 13:52:06 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.670 13:52:06 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:22:04.670 13:52:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.670 13:52:06 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:22:04.670 13:52:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:04.670 13:52:06 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.670 13:52:06 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:04.670 13:52:06 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.670 [2024-07-11 13:52:07.056101] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.670 13:52:07 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.928 13:52:07 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:05.188 [2024-07-11 13:52:07.384961] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.188 [2024-07-11 13:52:07.385169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.188 13:52:07 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.188 malloc0 00:22:05.188 13:52:07 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.448 13:52:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:05.448 [2024-07-11 13:52:07.878575] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:05.448 [2024-07-11 13:52:07.878601] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:05.448 [2024-07-11 13:52:07.878616] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:05.448 request: 00:22:05.448 { 00:22:05.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.448 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.448 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:05.448 "method": "nvmf_subsystem_add_host", 00:22:05.448 "req_id": 1 00:22:05.448 } 00:22:05.448 Got JSON-RPC error response 00:22:05.448 response: 00:22:05.448 { 00:22:05.448 "code": -32603, 00:22:05.448 "message": "Internal error" 00:22:05.448 } 00:22:05.448 13:52:07 -- common/autotest_common.sh@643 -- # es=1 00:22:05.448 13:52:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:05.448 13:52:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:05.448 13:52:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:05.448 13:52:07 -- target/tls.sh@189 -- # killprocess 1641730 00:22:05.448 13:52:07 -- common/autotest_common.sh@926 -- # '[' -z 1641730 ']' 00:22:05.448 13:52:07 -- common/autotest_common.sh@930 -- # kill -0 1641730 00:22:05.448 13:52:07 -- common/autotest_common.sh@931 -- # uname 00:22:05.448 13:52:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.448 13:52:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1641730 00:22:05.707 13:52:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:05.707 13:52:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:05.707 13:52:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1641730' 00:22:05.707 killing process with pid 1641730 00:22:05.707 13:52:07 -- common/autotest_common.sh@945 -- # kill 1641730 00:22:05.707 13:52:07 -- common/autotest_common.sh@950 -- # wait 1641730 00:22:05.707 13:52:08 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:05.707 13:52:08 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:22:05.707 13:52:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:05.707 13:52:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:05.707 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:22:05.707 13:52:08 -- nvmf/common.sh@469 -- # nvmfpid=1642186 00:22:05.707 13:52:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:22:05.707 13:52:08 -- nvmf/common.sh@470 -- # waitforlisten 1642186 00:22:05.707 13:52:08 -- common/autotest_common.sh@819 -- # '[' -z 1642186 ']' 00:22:05.707 13:52:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.707 13:52:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.707 13:52:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.707 13:52:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.707 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:22:05.707 [2024-07-11 13:52:08.159353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:05.707 [2024-07-11 13:52:08.159397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.966 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.966 [2024-07-11 13:52:08.215113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.966 [2024-07-11 13:52:08.252909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:05.966 [2024-07-11 13:52:08.253015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.966 [2024-07-11 13:52:08.253022] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.966 [2024-07-11 13:52:08.253029] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
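The pair of app_setup_trace notices repeated for each nvmf_tgt instance name both ways of getting at the trace data (shm id 0, tracepoint group mask 0xFFFF). A sketch of the two capture paths, using exactly the command and the shm file from the notices; the build/bin location of the spdk_trace tool is an assumption about this workspace layout:

  ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/                         # raw buffer for offline analysis/debug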
00:22:05.966 [2024-07-11 13:52:08.253043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.535 13:52:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:06.535 13:52:08 -- common/autotest_common.sh@852 -- # return 0 00:22:06.535 13:52:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:06.535 13:52:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:06.535 13:52:08 -- common/autotest_common.sh@10 -- # set +x 00:22:06.535 13:52:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.535 13:52:08 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.535 13:52:08 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:06.535 13:52:08 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:06.794 [2024-07-11 13:52:09.136178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.794 13:52:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.054 13:52:09 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.054 [2024-07-11 13:52:09.461010] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.054 [2024-07-11 13:52:09.461205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.054 13:52:09 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.313 malloc0 00:22:07.313 13:52:09 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:07.572 13:52:09 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:07.572 13:52:09 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:07.572 13:52:09 -- target/tls.sh@197 -- # bdevperf_pid=1642453 00:22:07.572 13:52:09 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.572 13:52:09 -- target/tls.sh@200 -- # waitforlisten 1642453 /var/tmp/bdevperf.sock 00:22:07.572 13:52:09 -- common/autotest_common.sh@819 -- # '[' -z 1642453 ']' 00:22:07.572 13:52:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.572 13:52:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:07.572 13:52:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
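At this point setup_nvmf_tgt (target/tls.sh@58-67) has assembled the complete TLS-capable target. Condensed from the rpc.py calls traced above, in the order they ran; -k on the listener is what switches the TCP listener into the experimental TLS mode, and --psk on nvmf_subsystem_add_host is what ties the key to this host NQN:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk ./test/nvmf/target/key_long.txt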
00:22:07.572 13:52:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:07.572 13:52:09 -- common/autotest_common.sh@10 -- # set +x 00:22:07.572 [2024-07-11 13:52:10.002267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:07.572 [2024-07-11 13:52:10.002316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642453 ] 00:22:07.572 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.831 [2024-07-11 13:52:10.055150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.831 [2024-07-11 13:52:10.092874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.398 13:52:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:08.398 13:52:10 -- common/autotest_common.sh@852 -- # return 0 00:22:08.398 13:52:10 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:08.657 [2024-07-11 13:52:10.934240] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.657 TLSTESTn1 00:22:08.657 13:52:11 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:08.917 13:52:11 -- target/tls.sh@205 -- # tgtconf='{ 00:22:08.917 "subsystems": [ 00:22:08.917 { 00:22:08.917 "subsystem": "iobuf", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "iobuf_set_options", 00:22:08.917 "params": { 00:22:08.917 "small_pool_count": 8192, 00:22:08.917 "large_pool_count": 1024, 00:22:08.917 "small_bufsize": 8192, 00:22:08.917 "large_bufsize": 135168 00:22:08.917 } 00:22:08.917 } 00:22:08.917 ] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "sock", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "sock_impl_set_options", 00:22:08.917 "params": { 00:22:08.917 "impl_name": "posix", 00:22:08.917 "recv_buf_size": 2097152, 00:22:08.917 "send_buf_size": 2097152, 00:22:08.917 "enable_recv_pipe": true, 00:22:08.917 "enable_quickack": false, 00:22:08.917 "enable_placement_id": 0, 00:22:08.917 "enable_zerocopy_send_server": true, 00:22:08.917 "enable_zerocopy_send_client": false, 00:22:08.917 "zerocopy_threshold": 0, 00:22:08.917 "tls_version": 0, 00:22:08.917 "enable_ktls": false 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "sock_impl_set_options", 00:22:08.917 "params": { 00:22:08.917 "impl_name": "ssl", 00:22:08.917 "recv_buf_size": 4096, 00:22:08.917 "send_buf_size": 4096, 00:22:08.917 "enable_recv_pipe": true, 00:22:08.917 "enable_quickack": false, 00:22:08.917 "enable_placement_id": 0, 00:22:08.917 "enable_zerocopy_send_server": true, 00:22:08.917 "enable_zerocopy_send_client": false, 00:22:08.917 "zerocopy_threshold": 0, 00:22:08.917 "tls_version": 0, 00:22:08.917 "enable_ktls": false 00:22:08.917 } 00:22:08.917 } 00:22:08.917 ] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "vmd", 00:22:08.917 "config": [] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "accel", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "accel_set_options", 00:22:08.917 "params": { 00:22:08.917 "small_cache_size": 128, 
00:22:08.917 "large_cache_size": 16, 00:22:08.917 "task_count": 2048, 00:22:08.917 "sequence_count": 2048, 00:22:08.917 "buf_count": 2048 00:22:08.917 } 00:22:08.917 } 00:22:08.917 ] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "bdev", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "bdev_set_options", 00:22:08.917 "params": { 00:22:08.917 "bdev_io_pool_size": 65535, 00:22:08.917 "bdev_io_cache_size": 256, 00:22:08.917 "bdev_auto_examine": true, 00:22:08.917 "iobuf_small_cache_size": 128, 00:22:08.917 "iobuf_large_cache_size": 16 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_raid_set_options", 00:22:08.917 "params": { 00:22:08.917 "process_window_size_kb": 1024 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_iscsi_set_options", 00:22:08.917 "params": { 00:22:08.917 "timeout_sec": 30 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_nvme_set_options", 00:22:08.917 "params": { 00:22:08.917 "action_on_timeout": "none", 00:22:08.917 "timeout_us": 0, 00:22:08.917 "timeout_admin_us": 0, 00:22:08.917 "keep_alive_timeout_ms": 10000, 00:22:08.917 "transport_retry_count": 4, 00:22:08.917 "arbitration_burst": 0, 00:22:08.917 "low_priority_weight": 0, 00:22:08.917 "medium_priority_weight": 0, 00:22:08.917 "high_priority_weight": 0, 00:22:08.917 "nvme_adminq_poll_period_us": 10000, 00:22:08.917 "nvme_ioq_poll_period_us": 0, 00:22:08.917 "io_queue_requests": 0, 00:22:08.917 "delay_cmd_submit": true, 00:22:08.917 "bdev_retry_count": 3, 00:22:08.917 "transport_ack_timeout": 0, 00:22:08.917 "ctrlr_loss_timeout_sec": 0, 00:22:08.917 "reconnect_delay_sec": 0, 00:22:08.917 "fast_io_fail_timeout_sec": 0, 00:22:08.917 "generate_uuids": false, 00:22:08.917 "transport_tos": 0, 00:22:08.917 "io_path_stat": false, 00:22:08.917 "allow_accel_sequence": false 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_nvme_set_hotplug", 00:22:08.917 "params": { 00:22:08.917 "period_us": 100000, 00:22:08.917 "enable": false 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_malloc_create", 00:22:08.917 "params": { 00:22:08.917 "name": "malloc0", 00:22:08.917 "num_blocks": 8192, 00:22:08.917 "block_size": 4096, 00:22:08.917 "physical_block_size": 4096, 00:22:08.917 "uuid": "bfe0a27b-ea66-4ec7-b411-77d0b4ffd06a", 00:22:08.917 "optimal_io_boundary": 0 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "bdev_wait_for_examine" 00:22:08.917 } 00:22:08.917 ] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "nbd", 00:22:08.917 "config": [] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "scheduler", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "framework_set_scheduler", 00:22:08.917 "params": { 00:22:08.917 "name": "static" 00:22:08.917 } 00:22:08.917 } 00:22:08.917 ] 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "subsystem": "nvmf", 00:22:08.917 "config": [ 00:22:08.917 { 00:22:08.917 "method": "nvmf_set_config", 00:22:08.917 "params": { 00:22:08.917 "discovery_filter": "match_any", 00:22:08.917 "admin_cmd_passthru": { 00:22:08.917 "identify_ctrlr": false 00:22:08.917 } 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "nvmf_set_max_subsystems", 00:22:08.917 "params": { 00:22:08.917 "max_subsystems": 1024 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "nvmf_set_crdt", 00:22:08.917 "params": { 00:22:08.917 "crdt1": 0, 00:22:08.917 "crdt2": 0, 00:22:08.917 "crdt3": 0 00:22:08.917 } 
00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "nvmf_create_transport", 00:22:08.917 "params": { 00:22:08.917 "trtype": "TCP", 00:22:08.917 "max_queue_depth": 128, 00:22:08.917 "max_io_qpairs_per_ctrlr": 127, 00:22:08.917 "in_capsule_data_size": 4096, 00:22:08.917 "max_io_size": 131072, 00:22:08.917 "io_unit_size": 131072, 00:22:08.917 "max_aq_depth": 128, 00:22:08.917 "num_shared_buffers": 511, 00:22:08.917 "buf_cache_size": 4294967295, 00:22:08.917 "dif_insert_or_strip": false, 00:22:08.917 "zcopy": false, 00:22:08.917 "c2h_success": false, 00:22:08.917 "sock_priority": 0, 00:22:08.917 "abort_timeout_sec": 1 00:22:08.917 } 00:22:08.917 }, 00:22:08.917 { 00:22:08.917 "method": "nvmf_create_subsystem", 00:22:08.917 "params": { 00:22:08.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.918 "allow_any_host": false, 00:22:08.918 "serial_number": "SPDK00000000000001", 00:22:08.918 "model_number": "SPDK bdev Controller", 00:22:08.918 "max_namespaces": 10, 00:22:08.918 "min_cntlid": 1, 00:22:08.918 "max_cntlid": 65519, 00:22:08.918 "ana_reporting": false 00:22:08.918 } 00:22:08.918 }, 00:22:08.918 { 00:22:08.918 "method": "nvmf_subsystem_add_host", 00:22:08.918 "params": { 00:22:08.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.918 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.918 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:08.918 } 00:22:08.918 }, 00:22:08.918 { 00:22:08.918 "method": "nvmf_subsystem_add_ns", 00:22:08.918 "params": { 00:22:08.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.918 "namespace": { 00:22:08.918 "nsid": 1, 00:22:08.918 "bdev_name": "malloc0", 00:22:08.918 "nguid": "BFE0A27BEA664EC7B41177D0B4FFD06A", 00:22:08.918 "uuid": "bfe0a27b-ea66-4ec7-b411-77d0b4ffd06a" 00:22:08.918 } 00:22:08.918 } 00:22:08.918 }, 00:22:08.918 { 00:22:08.918 "method": "nvmf_subsystem_add_listener", 00:22:08.918 "params": { 00:22:08.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.918 "listen_address": { 00:22:08.918 "trtype": "TCP", 00:22:08.918 "adrfam": "IPv4", 00:22:08.918 "traddr": "10.0.0.2", 00:22:08.918 "trsvcid": "4420" 00:22:08.918 }, 00:22:08.918 "secure_channel": true 00:22:08.918 } 00:22:08.918 } 00:22:08.918 ] 00:22:08.918 } 00:22:08.918 ] 00:22:08.918 }' 00:22:08.918 13:52:11 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:09.177 13:52:11 -- target/tls.sh@206 -- # bdevperfconf='{ 00:22:09.177 "subsystems": [ 00:22:09.177 { 00:22:09.177 "subsystem": "iobuf", 00:22:09.177 "config": [ 00:22:09.177 { 00:22:09.177 "method": "iobuf_set_options", 00:22:09.177 "params": { 00:22:09.177 "small_pool_count": 8192, 00:22:09.177 "large_pool_count": 1024, 00:22:09.177 "small_bufsize": 8192, 00:22:09.177 "large_bufsize": 135168 00:22:09.177 } 00:22:09.177 } 00:22:09.177 ] 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "subsystem": "sock", 00:22:09.177 "config": [ 00:22:09.177 { 00:22:09.177 "method": "sock_impl_set_options", 00:22:09.177 "params": { 00:22:09.177 "impl_name": "posix", 00:22:09.177 "recv_buf_size": 2097152, 00:22:09.177 "send_buf_size": 2097152, 00:22:09.177 "enable_recv_pipe": true, 00:22:09.177 "enable_quickack": false, 00:22:09.177 "enable_placement_id": 0, 00:22:09.177 "enable_zerocopy_send_server": true, 00:22:09.177 "enable_zerocopy_send_client": false, 00:22:09.177 "zerocopy_threshold": 0, 00:22:09.177 "tls_version": 0, 00:22:09.177 "enable_ktls": false 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": 
"sock_impl_set_options", 00:22:09.177 "params": { 00:22:09.177 "impl_name": "ssl", 00:22:09.177 "recv_buf_size": 4096, 00:22:09.177 "send_buf_size": 4096, 00:22:09.177 "enable_recv_pipe": true, 00:22:09.177 "enable_quickack": false, 00:22:09.177 "enable_placement_id": 0, 00:22:09.177 "enable_zerocopy_send_server": true, 00:22:09.177 "enable_zerocopy_send_client": false, 00:22:09.177 "zerocopy_threshold": 0, 00:22:09.177 "tls_version": 0, 00:22:09.177 "enable_ktls": false 00:22:09.177 } 00:22:09.177 } 00:22:09.177 ] 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "subsystem": "vmd", 00:22:09.177 "config": [] 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "subsystem": "accel", 00:22:09.177 "config": [ 00:22:09.177 { 00:22:09.177 "method": "accel_set_options", 00:22:09.177 "params": { 00:22:09.177 "small_cache_size": 128, 00:22:09.177 "large_cache_size": 16, 00:22:09.177 "task_count": 2048, 00:22:09.177 "sequence_count": 2048, 00:22:09.177 "buf_count": 2048 00:22:09.177 } 00:22:09.177 } 00:22:09.177 ] 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "subsystem": "bdev", 00:22:09.177 "config": [ 00:22:09.177 { 00:22:09.177 "method": "bdev_set_options", 00:22:09.177 "params": { 00:22:09.177 "bdev_io_pool_size": 65535, 00:22:09.177 "bdev_io_cache_size": 256, 00:22:09.177 "bdev_auto_examine": true, 00:22:09.177 "iobuf_small_cache_size": 128, 00:22:09.177 "iobuf_large_cache_size": 16 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": "bdev_raid_set_options", 00:22:09.177 "params": { 00:22:09.177 "process_window_size_kb": 1024 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": "bdev_iscsi_set_options", 00:22:09.177 "params": { 00:22:09.177 "timeout_sec": 30 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": "bdev_nvme_set_options", 00:22:09.177 "params": { 00:22:09.177 "action_on_timeout": "none", 00:22:09.177 "timeout_us": 0, 00:22:09.177 "timeout_admin_us": 0, 00:22:09.177 "keep_alive_timeout_ms": 10000, 00:22:09.177 "transport_retry_count": 4, 00:22:09.177 "arbitration_burst": 0, 00:22:09.177 "low_priority_weight": 0, 00:22:09.177 "medium_priority_weight": 0, 00:22:09.177 "high_priority_weight": 0, 00:22:09.177 "nvme_adminq_poll_period_us": 10000, 00:22:09.177 "nvme_ioq_poll_period_us": 0, 00:22:09.177 "io_queue_requests": 512, 00:22:09.177 "delay_cmd_submit": true, 00:22:09.177 "bdev_retry_count": 3, 00:22:09.177 "transport_ack_timeout": 0, 00:22:09.177 "ctrlr_loss_timeout_sec": 0, 00:22:09.177 "reconnect_delay_sec": 0, 00:22:09.177 "fast_io_fail_timeout_sec": 0, 00:22:09.177 "generate_uuids": false, 00:22:09.177 "transport_tos": 0, 00:22:09.177 "io_path_stat": false, 00:22:09.177 "allow_accel_sequence": false 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": "bdev_nvme_attach_controller", 00:22:09.177 "params": { 00:22:09.177 "name": "TLSTEST", 00:22:09.177 "trtype": "TCP", 00:22:09.177 "adrfam": "IPv4", 00:22:09.177 "traddr": "10.0.0.2", 00:22:09.177 "trsvcid": "4420", 00:22:09.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.177 "prchk_reftag": false, 00:22:09.177 "prchk_guard": false, 00:22:09.177 "ctrlr_loss_timeout_sec": 0, 00:22:09.177 "reconnect_delay_sec": 0, 00:22:09.177 "fast_io_fail_timeout_sec": 0, 00:22:09.177 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:09.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.177 "hdgst": false, 00:22:09.177 "ddgst": false 00:22:09.177 } 00:22:09.177 }, 00:22:09.177 { 00:22:09.177 "method": "bdev_nvme_set_hotplug", 00:22:09.177 
"params": { 00:22:09.177 "period_us": 100000, 00:22:09.177 "enable": false 00:22:09.178 } 00:22:09.178 }, 00:22:09.178 { 00:22:09.178 "method": "bdev_wait_for_examine" 00:22:09.178 } 00:22:09.178 ] 00:22:09.178 }, 00:22:09.178 { 00:22:09.178 "subsystem": "nbd", 00:22:09.178 "config": [] 00:22:09.178 } 00:22:09.178 ] 00:22:09.178 }' 00:22:09.178 13:52:11 -- target/tls.sh@208 -- # killprocess 1642453 00:22:09.178 13:52:11 -- common/autotest_common.sh@926 -- # '[' -z 1642453 ']' 00:22:09.178 13:52:11 -- common/autotest_common.sh@930 -- # kill -0 1642453 00:22:09.178 13:52:11 -- common/autotest_common.sh@931 -- # uname 00:22:09.178 13:52:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.178 13:52:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1642453 00:22:09.178 13:52:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:09.178 13:52:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:09.178 13:52:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1642453' 00:22:09.178 killing process with pid 1642453 00:22:09.178 13:52:11 -- common/autotest_common.sh@945 -- # kill 1642453 00:22:09.178 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.178 00:22:09.178 Latency(us) 00:22:09.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.178 =================================================================================================================== 00:22:09.178 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.178 13:52:11 -- common/autotest_common.sh@950 -- # wait 1642453 00:22:09.436 13:52:11 -- target/tls.sh@209 -- # killprocess 1642186 00:22:09.436 13:52:11 -- common/autotest_common.sh@926 -- # '[' -z 1642186 ']' 00:22:09.436 13:52:11 -- common/autotest_common.sh@930 -- # kill -0 1642186 00:22:09.436 13:52:11 -- common/autotest_common.sh@931 -- # uname 00:22:09.436 13:52:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:09.436 13:52:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1642186 00:22:09.436 13:52:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:09.436 13:52:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:09.436 13:52:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1642186' 00:22:09.436 killing process with pid 1642186 00:22:09.436 13:52:11 -- common/autotest_common.sh@945 -- # kill 1642186 00:22:09.436 13:52:11 -- common/autotest_common.sh@950 -- # wait 1642186 00:22:09.696 13:52:11 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:09.696 13:52:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:09.696 13:52:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:09.696 13:52:11 -- target/tls.sh@212 -- # echo '{ 00:22:09.696 "subsystems": [ 00:22:09.696 { 00:22:09.696 "subsystem": "iobuf", 00:22:09.696 "config": [ 00:22:09.696 { 00:22:09.696 "method": "iobuf_set_options", 00:22:09.696 "params": { 00:22:09.696 "small_pool_count": 8192, 00:22:09.696 "large_pool_count": 1024, 00:22:09.696 "small_bufsize": 8192, 00:22:09.696 "large_bufsize": 135168 00:22:09.696 } 00:22:09.696 } 00:22:09.696 ] 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "subsystem": "sock", 00:22:09.696 "config": [ 00:22:09.696 { 00:22:09.696 "method": "sock_impl_set_options", 00:22:09.696 "params": { 00:22:09.696 "impl_name": "posix", 00:22:09.696 "recv_buf_size": 2097152, 00:22:09.696 "send_buf_size": 2097152, 
00:22:09.696 "enable_recv_pipe": true, 00:22:09.696 "enable_quickack": false, 00:22:09.696 "enable_placement_id": 0, 00:22:09.696 "enable_zerocopy_send_server": true, 00:22:09.696 "enable_zerocopy_send_client": false, 00:22:09.696 "zerocopy_threshold": 0, 00:22:09.696 "tls_version": 0, 00:22:09.696 "enable_ktls": false 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "sock_impl_set_options", 00:22:09.696 "params": { 00:22:09.696 "impl_name": "ssl", 00:22:09.696 "recv_buf_size": 4096, 00:22:09.696 "send_buf_size": 4096, 00:22:09.696 "enable_recv_pipe": true, 00:22:09.696 "enable_quickack": false, 00:22:09.696 "enable_placement_id": 0, 00:22:09.696 "enable_zerocopy_send_server": true, 00:22:09.696 "enable_zerocopy_send_client": false, 00:22:09.696 "zerocopy_threshold": 0, 00:22:09.696 "tls_version": 0, 00:22:09.696 "enable_ktls": false 00:22:09.696 } 00:22:09.696 } 00:22:09.696 ] 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "subsystem": "vmd", 00:22:09.696 "config": [] 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "subsystem": "accel", 00:22:09.696 "config": [ 00:22:09.696 { 00:22:09.696 "method": "accel_set_options", 00:22:09.696 "params": { 00:22:09.696 "small_cache_size": 128, 00:22:09.696 "large_cache_size": 16, 00:22:09.696 "task_count": 2048, 00:22:09.696 "sequence_count": 2048, 00:22:09.696 "buf_count": 2048 00:22:09.696 } 00:22:09.696 } 00:22:09.696 ] 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "subsystem": "bdev", 00:22:09.696 "config": [ 00:22:09.696 { 00:22:09.696 "method": "bdev_set_options", 00:22:09.696 "params": { 00:22:09.696 "bdev_io_pool_size": 65535, 00:22:09.696 "bdev_io_cache_size": 256, 00:22:09.696 "bdev_auto_examine": true, 00:22:09.696 "iobuf_small_cache_size": 128, 00:22:09.696 "iobuf_large_cache_size": 16 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "bdev_raid_set_options", 00:22:09.696 "params": { 00:22:09.696 "process_window_size_kb": 1024 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "bdev_iscsi_set_options", 00:22:09.696 "params": { 00:22:09.696 "timeout_sec": 30 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "bdev_nvme_set_options", 00:22:09.696 "params": { 00:22:09.696 "action_on_timeout": "none", 00:22:09.696 "timeout_us": 0, 00:22:09.696 "timeout_admin_us": 0, 00:22:09.696 "keep_alive_timeout_ms": 10000, 00:22:09.696 "transport_retry_count": 4, 00:22:09.696 "arbitration_burst": 0, 00:22:09.696 "low_priority_weight": 0, 00:22:09.696 "medium_priority_weight": 0, 00:22:09.696 "high_priority_weight": 0, 00:22:09.696 "nvme_adminq_poll_period_us": 10000, 00:22:09.696 "nvme_ioq_poll_period_us": 0, 00:22:09.696 "io_queue_requests": 0, 00:22:09.696 "delay_cmd_submit": true, 00:22:09.696 "bdev_retry_count": 3, 00:22:09.696 "transport_ack_timeout": 0, 00:22:09.696 "ctrlr_loss_timeout_sec": 0, 00:22:09.696 "reconnect_delay_sec": 0, 00:22:09.696 "fast_io_fail_timeout_sec": 0, 00:22:09.696 "generate_uuids": false, 00:22:09.696 "transport_tos": 0, 00:22:09.696 "io_path_stat": false, 00:22:09.696 "allow_accel_sequence": false 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "bdev_nvme_set_hotplug", 00:22:09.696 "params": { 00:22:09.696 "period_us": 100000, 00:22:09.696 "enable": false 00:22:09.696 } 00:22:09.696 }, 00:22:09.696 { 00:22:09.696 "method": "bdev_malloc_create", 00:22:09.696 "params": { 00:22:09.696 "name": "malloc0", 00:22:09.696 "num_blocks": 8192, 00:22:09.696 "block_size": 4096, 00:22:09.696 "physical_block_size": 4096, 00:22:09.696 "uuid": 
"bfe0a27b-ea66-4ec7-b411-77d0b4ffd06a", 00:22:09.697 "optimal_io_boundary": 0 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "bdev_wait_for_examine" 00:22:09.697 } 00:22:09.697 ] 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "subsystem": "nbd", 00:22:09.697 "config": [] 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "subsystem": "scheduler", 00:22:09.697 "config": [ 00:22:09.697 { 00:22:09.697 "method": "framework_set_scheduler", 00:22:09.697 "params": { 00:22:09.697 "name": "static" 00:22:09.697 } 00:22:09.697 } 00:22:09.697 ] 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "subsystem": "nvmf", 00:22:09.697 "config": [ 00:22:09.697 { 00:22:09.697 "method": "nvmf_set_config", 00:22:09.697 "params": { 00:22:09.697 "discovery_filter": "match_any", 00:22:09.697 "admin_cmd_passthru": { 00:22:09.697 "identify_ctrlr": false 00:22:09.697 } 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_set_max_subsystems", 00:22:09.697 "params": { 00:22:09.697 "max_subsystems": 1024 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_set_crdt", 00:22:09.697 "params": { 00:22:09.697 "crdt1": 0, 00:22:09.697 "crdt2": 0, 00:22:09.697 "crdt3": 0 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_create_transport", 00:22:09.697 "params": { 00:22:09.697 "trtype": "TCP", 00:22:09.697 "max_queue_depth": 128, 00:22:09.697 "max_io_qpairs_per_ctrlr": 127, 00:22:09.697 "in_capsule_data_size": 4096, 00:22:09.697 "max_io_size": 131072, 00:22:09.697 "io_unit_size": 131072, 00:22:09.697 "max_aq_depth": 128, 00:22:09.697 "num_shared_buffers": 511, 00:22:09.697 "buf_cache_size": 4294967295, 00:22:09.697 "dif_insert_or_strip": false, 00:22:09.697 "zcopy": false, 00:22:09.697 "c2h_success": false, 00:22:09.697 "sock_priority": 0, 00:22:09.697 "abort_timeout_sec": 1 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_create_subsystem", 00:22:09.697 "params": { 00:22:09.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.697 "allow_any_host": false, 00:22:09.697 "serial_number": "SPDK00000000000001", 00:22:09.697 "model_number": "SPDK bdev Controller", 00:22:09.697 "max_namespaces": 10, 00:22:09.697 "min_cntlid": 1, 00:22:09.697 "max_cntlid": 65519, 00:22:09.697 "ana_reporting": false 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_subsystem_add_host", 00:22:09.697 "params": { 00:22:09.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.697 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.697 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_subsystem_add_ns", 00:22:09.697 "params": { 00:22:09.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.697 "namespace": { 00:22:09.697 "nsid": 1, 00:22:09.697 "bdev_name": "malloc0", 00:22:09.697 "nguid": "BFE0A27BEA664EC7B41177D0B4FFD06A", 00:22:09.697 "uuid": "bfe0a27b-ea66-4ec7-b411-77d0b4ffd06a" 00:22:09.697 } 00:22:09.697 } 00:22:09.697 }, 00:22:09.697 { 00:22:09.697 "method": "nvmf_subsystem_add_listener", 00:22:09.697 "params": { 00:22:09.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.697 "listen_address": { 00:22:09.697 "trtype": "TCP", 00:22:09.697 "adrfam": "IPv4", 00:22:09.697 "traddr": "10.0.0.2", 00:22:09.697 "trsvcid": "4420" 00:22:09.697 }, 00:22:09.697 "secure_channel": true 00:22:09.697 } 00:22:09.697 } 00:22:09.697 ] 00:22:09.697 } 00:22:09.697 ] 00:22:09.697 }' 00:22:09.697 13:52:11 -- common/autotest_common.sh@10 -- # set +x 
00:22:09.697 13:52:11 -- nvmf/common.sh@469 -- # nvmfpid=1642928 00:22:09.697 13:52:11 -- nvmf/common.sh@470 -- # waitforlisten 1642928 00:22:09.697 13:52:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:09.697 13:52:11 -- common/autotest_common.sh@819 -- # '[' -z 1642928 ']' 00:22:09.697 13:52:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.697 13:52:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.697 13:52:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.697 13:52:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.697 13:52:11 -- common/autotest_common.sh@10 -- # set +x 00:22:09.697 [2024-07-11 13:52:11.975963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:09.697 [2024-07-11 13:52:11.976007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.697 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.697 [2024-07-11 13:52:12.033064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.697 [2024-07-11 13:52:12.069441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:09.697 [2024-07-11 13:52:12.069559] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.697 [2024-07-11 13:52:12.069567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.697 [2024-07-11 13:52:12.069573] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.697 [2024-07-11 13:52:12.069589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.957 [2024-07-11 13:52:12.258782] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.957 [2024-07-11 13:52:12.290814] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.957 [2024-07-11 13:52:12.290998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.526 13:52:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.526 13:52:12 -- common/autotest_common.sh@852 -- # return 0 00:22:10.526 13:52:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:10.526 13:52:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:10.526 13:52:12 -- common/autotest_common.sh@10 -- # set +x 00:22:10.526 13:52:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.526 13:52:12 -- target/tls.sh@216 -- # bdevperf_pid=1642956 00:22:10.526 13:52:12 -- target/tls.sh@217 -- # waitforlisten 1642956 /var/tmp/bdevperf.sock 00:22:10.526 13:52:12 -- common/autotest_common.sh@819 -- # '[' -z 1642956 ']' 00:22:10.526 13:52:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.526 13:52:12 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:10.526 13:52:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.526 13:52:12 -- target/tls.sh@213 -- # echo '{ 00:22:10.526 "subsystems": [ 00:22:10.526 { 00:22:10.526 "subsystem": "iobuf", 00:22:10.526 "config": [ 00:22:10.526 { 00:22:10.526 "method": "iobuf_set_options", 00:22:10.526 "params": { 00:22:10.526 "small_pool_count": 8192, 00:22:10.526 "large_pool_count": 1024, 00:22:10.526 "small_bufsize": 8192, 00:22:10.526 "large_bufsize": 135168 00:22:10.526 } 00:22:10.526 } 00:22:10.526 ] 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "subsystem": "sock", 00:22:10.526 "config": [ 00:22:10.526 { 00:22:10.526 "method": "sock_impl_set_options", 00:22:10.526 "params": { 00:22:10.526 "impl_name": "posix", 00:22:10.526 "recv_buf_size": 2097152, 00:22:10.526 "send_buf_size": 2097152, 00:22:10.526 "enable_recv_pipe": true, 00:22:10.526 "enable_quickack": false, 00:22:10.526 "enable_placement_id": 0, 00:22:10.526 "enable_zerocopy_send_server": true, 00:22:10.526 "enable_zerocopy_send_client": false, 00:22:10.526 "zerocopy_threshold": 0, 00:22:10.526 "tls_version": 0, 00:22:10.526 "enable_ktls": false 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "sock_impl_set_options", 00:22:10.526 "params": { 00:22:10.526 "impl_name": "ssl", 00:22:10.526 "recv_buf_size": 4096, 00:22:10.526 "send_buf_size": 4096, 00:22:10.526 "enable_recv_pipe": true, 00:22:10.526 "enable_quickack": false, 00:22:10.526 "enable_placement_id": 0, 00:22:10.526 "enable_zerocopy_send_server": true, 00:22:10.526 "enable_zerocopy_send_client": false, 00:22:10.526 "zerocopy_threshold": 0, 00:22:10.526 "tls_version": 0, 00:22:10.526 "enable_ktls": false 00:22:10.526 } 00:22:10.526 } 00:22:10.526 ] 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "subsystem": "vmd", 00:22:10.526 "config": [] 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "subsystem": "accel", 00:22:10.526 "config": [ 00:22:10.526 { 00:22:10.526 "method": "accel_set_options", 00:22:10.526 "params": { 00:22:10.526 "small_cache_size": 128, 00:22:10.526 
"large_cache_size": 16, 00:22:10.526 "task_count": 2048, 00:22:10.526 "sequence_count": 2048, 00:22:10.526 "buf_count": 2048 00:22:10.526 } 00:22:10.526 } 00:22:10.526 ] 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "subsystem": "bdev", 00:22:10.526 "config": [ 00:22:10.526 { 00:22:10.526 "method": "bdev_set_options", 00:22:10.526 "params": { 00:22:10.526 "bdev_io_pool_size": 65535, 00:22:10.526 "bdev_io_cache_size": 256, 00:22:10.526 "bdev_auto_examine": true, 00:22:10.526 "iobuf_small_cache_size": 128, 00:22:10.526 "iobuf_large_cache_size": 16 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_raid_set_options", 00:22:10.526 "params": { 00:22:10.526 "process_window_size_kb": 1024 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_iscsi_set_options", 00:22:10.526 "params": { 00:22:10.526 "timeout_sec": 30 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_nvme_set_options", 00:22:10.526 "params": { 00:22:10.526 "action_on_timeout": "none", 00:22:10.526 "timeout_us": 0, 00:22:10.526 "timeout_admin_us": 0, 00:22:10.526 "keep_alive_timeout_ms": 10000, 00:22:10.526 "transport_retry_count": 4, 00:22:10.526 "arbitration_burst": 0, 00:22:10.526 "low_priority_weight": 0, 00:22:10.526 "medium_priority_weight": 0, 00:22:10.526 "high_priority_weight": 0, 00:22:10.526 "nvme_adminq_poll_period_us": 10000, 00:22:10.526 "nvme_ioq_poll_period_us": 0, 00:22:10.526 "io_queue_requests": 512, 00:22:10.526 "delay_cmd_submit": true, 00:22:10.526 "bdev_retry_count": 3, 00:22:10.526 "transport_ack_timeout": 0, 00:22:10.526 "ctrlr_loss_timeout_sec": 0, 00:22:10.526 "reconnect_delay_sec": 0, 00:22:10.526 "fast_io_fail_timeout_sec": 0, 00:22:10.526 "generate_uuids": false, 00:22:10.526 "transport_tos": 0, 00:22:10.526 "io_path_stat": false, 00:22:10.526 "allow_accel_sequence": false 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_nvme_attach_controller", 00:22:10.526 "params": { 00:22:10.526 "name": "TLSTEST", 00:22:10.526 "trtype": "TCP", 00:22:10.526 "adrfam": "IPv4", 00:22:10.526 "traddr": "10.0.0.2", 00:22:10.526 "trsvcid": "4420", 00:22:10.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.526 "prchk_reftag": false, 00:22:10.526 "prchk_guard": false, 00:22:10.526 "ctrlr_loss_timeout_sec": 0, 00:22:10.526 "reconnect_delay_sec": 0, 00:22:10.526 "fast_io_fail_timeout_sec": 0, 00:22:10.526 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:22:10.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.526 "hdgst": 13:52:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.526 false, 00:22:10.526 "ddgst": false 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_nvme_set_hotplug", 00:22:10.526 "params": { 00:22:10.526 "period_us": 100000, 00:22:10.526 "enable": false 00:22:10.526 } 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "method": "bdev_wait_for_examine" 00:22:10.526 } 00:22:10.526 ] 00:22:10.526 }, 00:22:10.526 { 00:22:10.526 "subsystem": "nbd", 00:22:10.526 "config": [] 00:22:10.526 } 00:22:10.526 ] 00:22:10.526 }' 00:22:10.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:10.526 13:52:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.526 13:52:12 -- common/autotest_common.sh@10 -- # set +x 00:22:10.526 [2024-07-11 13:52:12.832574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:10.527 [2024-07-11 13:52:12.832618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642956 ] 00:22:10.527 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.527 [2024-07-11 13:52:12.882718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.527 [2024-07-11 13:52:12.920922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.785 [2024-07-11 13:52:13.048771] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.351 13:52:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.351 13:52:13 -- common/autotest_common.sh@852 -- # return 0 00:22:11.351 13:52:13 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:11.351 Running I/O for 10 seconds... 00:22:21.369 00:22:21.369 Latency(us) 00:22:21.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.370 Verification LBA range: start 0x0 length 0x2000 00:22:21.370 TLSTESTn1 : 10.03 5446.69 21.28 0.00 0.00 23470.14 5442.34 51516.99 00:22:21.370 =================================================================================================================== 00:22:21.370 Total : 5446.69 21.28 0.00 0.00 23470.14 5442.34 51516.99 00:22:21.370 0 00:22:21.370 13:52:23 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.370 13:52:23 -- target/tls.sh@223 -- # killprocess 1642956 00:22:21.370 13:52:23 -- common/autotest_common.sh@926 -- # '[' -z 1642956 ']' 00:22:21.370 13:52:23 -- common/autotest_common.sh@930 -- # kill -0 1642956 00:22:21.370 13:52:23 -- common/autotest_common.sh@931 -- # uname 00:22:21.370 13:52:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:21.370 13:52:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1642956 00:22:21.370 13:52:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:21.370 13:52:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:21.370 13:52:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1642956' 00:22:21.370 killing process with pid 1642956 00:22:21.370 13:52:23 -- common/autotest_common.sh@945 -- # kill 1642956 00:22:21.370 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.370 00:22:21.370 Latency(us) 00:22:21.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.370 =================================================================================================================== 00:22:21.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.370 13:52:23 -- common/autotest_common.sh@950 -- # wait 1642956 00:22:21.627 13:52:23 -- target/tls.sh@224 -- # killprocess 1642928 00:22:21.627 13:52:23 -- common/autotest_common.sh@926 -- # '[' -z 1642928 ']' 00:22:21.627 13:52:23 -- common/autotest_common.sh@930 -- # kill -0 1642928 00:22:21.627 13:52:23 -- 
common/autotest_common.sh@931 -- # uname 00:22:21.627 13:52:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:21.627 13:52:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1642928 00:22:21.627 13:52:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:21.627 13:52:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:21.627 13:52:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1642928' 00:22:21.627 killing process with pid 1642928 00:22:21.627 13:52:24 -- common/autotest_common.sh@945 -- # kill 1642928 00:22:21.627 13:52:24 -- common/autotest_common.sh@950 -- # wait 1642928 00:22:21.885 13:52:24 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:22:21.885 13:52:24 -- target/tls.sh@227 -- # cleanup 00:22:21.885 13:52:24 -- target/tls.sh@15 -- # process_shm --id 0 00:22:21.885 13:52:24 -- common/autotest_common.sh@796 -- # type=--id 00:22:21.885 13:52:24 -- common/autotest_common.sh@797 -- # id=0 00:22:21.885 13:52:24 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:21.885 13:52:24 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:21.885 13:52:24 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:21.885 13:52:24 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:21.885 13:52:24 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:21.885 13:52:24 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:21.885 nvmf_trace.0 00:22:21.885 13:52:24 -- common/autotest_common.sh@811 -- # return 0 00:22:21.885 13:52:24 -- target/tls.sh@16 -- # killprocess 1642956 00:22:21.885 13:52:24 -- common/autotest_common.sh@926 -- # '[' -z 1642956 ']' 00:22:21.885 13:52:24 -- common/autotest_common.sh@930 -- # kill -0 1642956 00:22:21.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1642956) - No such process 00:22:21.885 13:52:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1642956 is not found' 00:22:21.885 Process with pid 1642956 is not found 00:22:21.885 13:52:24 -- target/tls.sh@17 -- # nvmftestfini 00:22:21.885 13:52:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:21.885 13:52:24 -- nvmf/common.sh@116 -- # sync 00:22:21.885 13:52:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:21.885 13:52:24 -- nvmf/common.sh@119 -- # set +e 00:22:21.885 13:52:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:21.885 13:52:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:21.885 rmmod nvme_tcp 00:22:21.885 rmmod nvme_fabrics 00:22:21.885 rmmod nvme_keyring 00:22:22.144 13:52:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:22.144 13:52:24 -- nvmf/common.sh@123 -- # set -e 00:22:22.144 13:52:24 -- nvmf/common.sh@124 -- # return 0 00:22:22.144 13:52:24 -- nvmf/common.sh@477 -- # '[' -n 1642928 ']' 00:22:22.144 13:52:24 -- nvmf/common.sh@478 -- # killprocess 1642928 00:22:22.144 13:52:24 -- common/autotest_common.sh@926 -- # '[' -z 1642928 ']' 00:22:22.144 13:52:24 -- common/autotest_common.sh@930 -- # kill -0 1642928 00:22:22.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1642928) - No such process 00:22:22.144 13:52:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1642928 is not found' 00:22:22.144 Process with pid 1642928 is not found 00:22:22.144 
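Two cleanup conventions are visible above: process_shm archives the /dev/shm/nvmf_trace.0 buffer into the output directory before teardown, and killprocess treats an already-gone pid as success, downgrading bash's 'No such process' error to the 'Process with pid ... is not found' note instead of failing the run. A simplified sketch of that tolerant kill (the real helper in autotest_common.sh additionally checks the process name via ps before signalling):

  killprocess() {
      local pid=$1
      if kill -0 "$pid" 2>/dev/null; then    # is the pid still alive?
          kill "$pid" && wait "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }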
13:52:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:22.144 13:52:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:22.144 13:52:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:22.144 13:52:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.144 13:52:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:22.144 13:52:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.144 13:52:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.144 13:52:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.046 13:52:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:24.046 13:52:26 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:22:24.046 00:22:24.046 real 1m10.921s 00:22:24.046 user 1m44.055s 00:22:24.046 sys 0m28.088s 00:22:24.046 13:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:24.046 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.046 ************************************ 00:22:24.046 END TEST nvmf_tls 00:22:24.046 ************************************ 00:22:24.046 13:52:26 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:24.046 13:52:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:24.046 13:52:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:24.046 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:22:24.046 ************************************ 00:22:24.046 START TEST nvmf_fips 00:22:24.046 ************************************ 00:22:24.046 13:52:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:24.306 * Looking for test storage... 
00:22:24.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:24.306 13:52:26 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.306 13:52:26 -- nvmf/common.sh@7 -- # uname -s 00:22:24.306 13:52:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.306 13:52:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.306 13:52:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.306 13:52:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.306 13:52:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.306 13:52:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.306 13:52:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.306 13:52:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.306 13:52:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.306 13:52:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.306 13:52:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.306 13:52:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.306 13:52:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.306 13:52:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.306 13:52:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.306 13:52:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.306 13:52:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.306 13:52:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.306 13:52:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.306 13:52:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.306 13:52:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.306 13:52:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.306 13:52:26 -- paths/export.sh@5 -- # export PATH 00:22:24.306 13:52:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.306 13:52:26 -- nvmf/common.sh@46 -- # : 0 00:22:24.306 13:52:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:24.306 13:52:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:24.306 13:52:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:24.306 13:52:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.306 13:52:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.306 13:52:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:24.306 13:52:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:24.306 13:52:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:24.306 13:52:26 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:24.306 13:52:26 -- fips/fips.sh@89 -- # check_openssl_version 00:22:24.306 13:52:26 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:24.306 13:52:26 -- fips/fips.sh@85 -- # openssl version 00:22:24.306 13:52:26 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:24.306 13:52:26 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:24.306 13:52:26 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:24.306 13:52:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:24.306 13:52:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:24.306 13:52:26 -- scripts/common.sh@335 -- # IFS=.-: 00:22:24.306 13:52:26 -- scripts/common.sh@335 -- # read -ra ver1 00:22:24.306 13:52:26 -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.306 13:52:26 -- scripts/common.sh@336 -- # read -ra ver2 00:22:24.306 13:52:26 -- scripts/common.sh@337 -- # local 'op=>=' 00:22:24.306 13:52:26 -- scripts/common.sh@339 -- # ver1_l=3 00:22:24.306 13:52:26 -- scripts/common.sh@340 -- # ver2_l=3 00:22:24.306 13:52:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:24.306 13:52:26 -- scripts/common.sh@343 -- # case "$op" in 00:22:24.306 13:52:26 -- scripts/common.sh@347 -- # : 1 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # decimal 3 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=3 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 3 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # ver1[v]=3 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # decimal 3 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=3 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 3 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # ver2[v]=3 00:22:24.306 13:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:24.306 13:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v++ )) 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # decimal 0 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=0 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 0 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # ver1[v]=0 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # decimal 0 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=0 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 0 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:24.306 13:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:24.306 13:52:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v++ )) 00:22:24.306 13:52:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # decimal 9 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=9 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 9 00:22:24.306 13:52:26 -- scripts/common.sh@364 -- # ver1[v]=9 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # decimal 0 00:22:24.306 13:52:26 -- scripts/common.sh@352 -- # local d=0 00:22:24.306 13:52:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:24.306 13:52:26 -- scripts/common.sh@354 -- # echo 0 00:22:24.306 13:52:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:22:24.306 13:52:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:24.306 13:52:26 -- scripts/common.sh@366 -- # return 0 00:22:24.306 13:52:26 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:24.306 13:52:26 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:24.306 13:52:26 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:24.306 13:52:26 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:24.306 13:52:26 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:24.306 13:52:26 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:24.306 13:52:26 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:24.306 13:52:26 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:24.306 13:52:26 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:22:24.306 13:52:26 -- fips/fips.sh@114 -- # build_openssl_config 00:22:24.306 13:52:26 -- fips/fips.sh@37 -- # cat 00:22:24.307 13:52:26 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:24.307 13:52:26 -- fips/fips.sh@58 -- # cat - 00:22:24.307 13:52:26 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:24.307 13:52:26 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:24.307 13:52:26 -- fips/fips.sh@117 -- # mapfile -t providers 00:22:24.307 13:52:26 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:22:24.307 13:52:26 -- fips/fips.sh@117 -- # openssl list -providers 00:22:24.307 13:52:26 -- fips/fips.sh@117 -- # grep name 00:22:24.307 13:52:26 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:24.307 13:52:26 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:24.307 13:52:26 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:24.307 13:52:26 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:24.307 13:52:26 -- common/autotest_common.sh@640 -- # local es=0 00:22:24.307 13:52:26 -- fips/fips.sh@128 -- # : 00:22:24.307 13:52:26 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:24.307 13:52:26 -- common/autotest_common.sh@628 -- # local arg=openssl 00:22:24.307 13:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:24.307 13:52:26 -- common/autotest_common.sh@632 -- # type -t openssl 00:22:24.307 13:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:24.307 13:52:26 -- common/autotest_common.sh@634 -- # type -P openssl 00:22:24.307 13:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:24.307 13:52:26 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:22:24.307 13:52:26 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:22:24.307 13:52:26 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:22:24.307 Error setting digest 00:22:24.307 0042FD347E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:24.307 0042FD347E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:24.307 13:52:26 -- common/autotest_common.sh@643 -- # es=1 00:22:24.307 13:52:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:24.307 13:52:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:24.307 13:52:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
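[Editor's annotation - not part of the captured log] The fips.sh trace above is a pre-flight check that the host OpenSSL really enforces FIPS mode before any TLS traffic is generated: it parses "openssl version" (3.0.9 >= 3.0.0), confirms /usr/lib64/ossl-modules/fips.so is present, points OPENSSL_CONF at a generated spdk_fips.conf, verifies that both a base and a fips provider are loaded, and finally expects "openssl md5" to fail, because MD5 is not a FIPS-approved digest. The "Error setting digest" lines are therefore the expected, successful outcome of the probe. A minimal standalone sketch of the same sanity check, assuming an OpenSSL 3.x host already configured for FIPS (provider names and config paths vary by distro):

    # List active providers; a FIPS-enabled host should show a fips provider
    openssl list -providers | grep -i name
    # A non-approved digest must be rejected when FIPS enforcement is on
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo 'md5 accepted - FIPS enforcement is NOT active' >&2
        exit 1
    fi
    echo 'md5 rejected as expected - safe to run the FIPS TLS test'
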
00:22:24.307 13:52:26 -- fips/fips.sh@131 -- # nvmftestinit 00:22:24.307 13:52:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:24.307 13:52:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.307 13:52:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:24.307 13:52:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:24.307 13:52:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:24.307 13:52:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.307 13:52:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.307 13:52:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.307 13:52:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:24.307 13:52:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:24.307 13:52:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:24.307 13:52:26 -- common/autotest_common.sh@10 -- # set +x 00:22:29.577 13:52:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:29.577 13:52:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:29.577 13:52:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:29.577 13:52:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:29.577 13:52:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:29.577 13:52:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:29.577 13:52:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:29.577 13:52:31 -- nvmf/common.sh@294 -- # net_devs=() 00:22:29.577 13:52:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:29.577 13:52:31 -- nvmf/common.sh@295 -- # e810=() 00:22:29.577 13:52:31 -- nvmf/common.sh@295 -- # local -ga e810 00:22:29.577 13:52:31 -- nvmf/common.sh@296 -- # x722=() 00:22:29.577 13:52:31 -- nvmf/common.sh@296 -- # local -ga x722 00:22:29.577 13:52:31 -- nvmf/common.sh@297 -- # mlx=() 00:22:29.577 13:52:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:29.577 13:52:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.577 13:52:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:29.577 13:52:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:29.578 13:52:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:29.578 13:52:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:29.578 Found 0000:86:00.0 
(0x8086 - 0x159b) 00:22:29.578 13:52:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:29.578 13:52:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:29.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:29.578 13:52:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:29.578 13:52:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.578 13:52:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.578 13:52:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:29.578 Found net devices under 0000:86:00.0: cvl_0_0 00:22:29.578 13:52:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.578 13:52:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:29.578 13:52:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.578 13:52:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.578 13:52:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:29.578 Found net devices under 0000:86:00.1: cvl_0_1 00:22:29.578 13:52:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.578 13:52:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:29.578 13:52:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:29.578 13:52:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.578 13:52:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.578 13:52:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.578 13:52:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:29.578 13:52:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.578 13:52:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.578 13:52:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:29.578 13:52:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.578 13:52:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.578 13:52:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:29.578 13:52:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:29.578 13:52:31 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:22:29.578 13:52:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.578 13:52:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.578 13:52:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.578 13:52:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:29.578 13:52:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.578 13:52:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.578 13:52:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.578 13:52:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:29.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:29.578 00:22:29.578 --- 10.0.0.2 ping statistics --- 00:22:29.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.578 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:29.578 13:52:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:22:29.578 00:22:29.578 --- 10.0.0.1 ping statistics --- 00:22:29.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.578 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:29.578 13:52:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.578 13:52:31 -- nvmf/common.sh@410 -- # return 0 00:22:29.578 13:52:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:29.578 13:52:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.578 13:52:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:29.578 13:52:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.578 13:52:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:29.578 13:52:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:29.578 13:52:31 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:29.578 13:52:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:29.578 13:52:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:29.578 13:52:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.578 13:52:31 -- nvmf/common.sh@469 -- # nvmfpid=1648382 00:22:29.578 13:52:31 -- nvmf/common.sh@470 -- # waitforlisten 1648382 00:22:29.578 13:52:31 -- common/autotest_common.sh@819 -- # '[' -z 1648382 ']' 00:22:29.578 13:52:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.578 13:52:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:29.578 13:52:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.578 13:52:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.578 13:52:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:29.578 13:52:31 -- common/autotest_common.sh@10 -- # set +x 00:22:29.578 [2024-07-11 13:52:31.730272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:29.578 [2024-07-11 13:52:31.730318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.578 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.578 [2024-07-11 13:52:31.787275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.578 [2024-07-11 13:52:31.826066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.578 [2024-07-11 13:52:31.826176] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.578 [2024-07-11 13:52:31.826184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.578 [2024-07-11 13:52:31.826190] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.578 [2024-07-11 13:52:31.826205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.146 13:52:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.146 13:52:32 -- common/autotest_common.sh@852 -- # return 0 00:22:30.146 13:52:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:30.146 13:52:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:30.146 13:52:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.146 13:52:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.146 13:52:32 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:30.146 13:52:32 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:30.146 13:52:32 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.146 13:52:32 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:30.146 13:52:32 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.146 13:52:32 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.146 13:52:32 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.146 13:52:32 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:30.405 [2024-07-11 13:52:32.691546] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.405 [2024-07-11 13:52:32.707556] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.405 [2024-07-11 13:52:32.707735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.405 malloc0 00:22:30.405 13:52:32 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.405 13:52:32 -- fips/fips.sh@148 -- # bdevperf_pid=1648498 00:22:30.405 13:52:32 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.405 13:52:32 -- fips/fips.sh@149 -- # waitforlisten 1648498 /var/tmp/bdevperf.sock 00:22:30.405 13:52:32 -- common/autotest_common.sh@819 -- # '[' -z 1648498 ']' 00:22:30.405 13:52:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.405 13:52:32 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.405 13:52:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.405 13:52:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.405 13:52:32 -- common/autotest_common.sh@10 -- # set +x 00:22:30.405 [2024-07-11 13:52:32.814104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:30.405 [2024-07-11 13:52:32.814155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648498 ] 00:22:30.405 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.664 [2024-07-11 13:52:32.865343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.664 [2024-07-11 13:52:32.902956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.233 13:52:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:31.233 13:52:33 -- common/autotest_common.sh@852 -- # return 0 00:22:31.233 13:52:33 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:31.492 [2024-07-11 13:52:33.743405] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.492 TLSTESTn1 00:22:31.492 13:52:33 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.492 Running I/O for 10 seconds... 
00:22:43.708 00:22:43.708 Latency(us) 00:22:43.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.708 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.708 Verification LBA range: start 0x0 length 0x2000 00:22:43.708 TLSTESTn1 : 10.01 5354.71 20.92 0.00 0.00 23880.41 3704.21 41943.04 00:22:43.708 =================================================================================================================== 00:22:43.708 Total : 5354.71 20.92 0.00 0.00 23880.41 3704.21 41943.04 00:22:43.708 0 00:22:43.708 13:52:43 -- fips/fips.sh@1 -- # cleanup 00:22:43.708 13:52:43 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:43.708 13:52:43 -- common/autotest_common.sh@796 -- # type=--id 00:22:43.708 13:52:43 -- common/autotest_common.sh@797 -- # id=0 00:22:43.708 13:52:43 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:22:43.708 13:52:43 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:43.708 13:52:43 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:22:43.708 13:52:43 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:22:43.708 13:52:43 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:22:43.708 13:52:43 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:43.708 nvmf_trace.0 00:22:43.708 13:52:44 -- common/autotest_common.sh@811 -- # return 0 00:22:43.708 13:52:44 -- fips/fips.sh@16 -- # killprocess 1648498 00:22:43.708 13:52:44 -- common/autotest_common.sh@926 -- # '[' -z 1648498 ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@930 -- # kill -0 1648498 00:22:43.708 13:52:44 -- common/autotest_common.sh@931 -- # uname 00:22:43.708 13:52:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1648498 00:22:43.708 13:52:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:22:43.708 13:52:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1648498' 00:22:43.708 killing process with pid 1648498 00:22:43.708 13:52:44 -- common/autotest_common.sh@945 -- # kill 1648498 00:22:43.708 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.708 00:22:43.708 Latency(us) 00:22:43.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.708 =================================================================================================================== 00:22:43.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.708 13:52:44 -- common/autotest_common.sh@950 -- # wait 1648498 00:22:43.708 13:52:44 -- fips/fips.sh@17 -- # nvmftestfini 00:22:43.708 13:52:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:43.708 13:52:44 -- nvmf/common.sh@116 -- # sync 00:22:43.708 13:52:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:43.708 13:52:44 -- nvmf/common.sh@119 -- # set +e 00:22:43.708 13:52:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:43.708 13:52:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:43.708 rmmod nvme_tcp 00:22:43.708 rmmod nvme_fabrics 00:22:43.708 rmmod nvme_keyring 00:22:43.708 13:52:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:43.708 13:52:44 -- nvmf/common.sh@123 -- # set -e 00:22:43.708 13:52:44 -- nvmf/common.sh@124 -- # return 0 
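[Editor's annotation - not part of the captured log] The run above is the heart of the FIPS test: a bdevperf client on core mask 0x4 attaches to the target over NVMe/TCP using the pre-shared key written earlier to test/nvmf/fips/key.txt (chmod 0600), then drives a 10-second 4096-byte verify workload at queue depth 128 against TLSTESTn1, here sustaining about 5355 IOPS at roughly 23.9 ms average latency (consistent with 128 outstanding I/Os: 128 / 5355 s ~= 23.9 ms). Reduced to its essentials, the client-side sequence from the trace is (paths shown relative to an SPDK checkout):

    # Start bdevperf in wait-for-RPC mode on its own socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # Attach an NVMe-oF controller over TCP with TLS via the PSK file
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    # Kick off the configured workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

All three commands appear verbatim (with absolute workspace paths) in the trace; nvmf_trace.0 is then archived and the bdevperf and target processes are torn down.
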
00:22:43.708 13:52:44 -- nvmf/common.sh@477 -- # '[' -n 1648382 ']' 00:22:43.708 13:52:44 -- nvmf/common.sh@478 -- # killprocess 1648382 00:22:43.708 13:52:44 -- common/autotest_common.sh@926 -- # '[' -z 1648382 ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@930 -- # kill -0 1648382 00:22:43.708 13:52:44 -- common/autotest_common.sh@931 -- # uname 00:22:43.708 13:52:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1648382 00:22:43.708 13:52:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:43.708 13:52:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:43.708 13:52:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1648382' 00:22:43.708 killing process with pid 1648382 00:22:43.708 13:52:44 -- common/autotest_common.sh@945 -- # kill 1648382 00:22:43.708 13:52:44 -- common/autotest_common.sh@950 -- # wait 1648382 00:22:43.708 13:52:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:43.708 13:52:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:43.708 13:52:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:43.708 13:52:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.708 13:52:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:43.708 13:52:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.708 13:52:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.708 13:52:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.278 13:52:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:44.278 13:52:46 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:44.278 00:22:44.278 real 0m20.133s 00:22:44.278 user 0m21.234s 00:22:44.278 sys 0m9.558s 00:22:44.278 13:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.278 13:52:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.278 ************************************ 00:22:44.278 END TEST nvmf_fips 00:22:44.278 ************************************ 00:22:44.278 13:52:46 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:44.278 13:52:46 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:44.278 13:52:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:44.278 13:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.278 13:52:46 -- common/autotest_common.sh@10 -- # set +x 00:22:44.278 ************************************ 00:22:44.278 START TEST nvmf_fuzz 00:22:44.278 ************************************ 00:22:44.278 13:52:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:44.278 * Looking for test storage... 
00:22:44.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.278 13:52:46 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.278 13:52:46 -- nvmf/common.sh@7 -- # uname -s 00:22:44.278 13:52:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.278 13:52:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.278 13:52:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.278 13:52:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.278 13:52:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.278 13:52:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.278 13:52:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.278 13:52:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.278 13:52:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.278 13:52:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.538 13:52:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.538 13:52:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.538 13:52:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.538 13:52:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.538 13:52:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.538 13:52:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.538 13:52:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.538 13:52:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.538 13:52:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.538 13:52:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.538 13:52:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.538 13:52:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.538 13:52:46 -- paths/export.sh@5 -- # export PATH 00:22:44.538 13:52:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.538 13:52:46 -- nvmf/common.sh@46 -- # : 0 00:22:44.538 13:52:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:44.538 13:52:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:44.538 13:52:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:44.539 13:52:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.539 13:52:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.539 13:52:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:44.539 13:52:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:44.539 13:52:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:44.539 13:52:46 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:44.539 13:52:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:44.539 13:52:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.539 13:52:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:44.539 13:52:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:44.539 13:52:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:44.539 13:52:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.539 13:52:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.539 13:52:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.539 13:52:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:44.539 13:52:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:44.539 13:52:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:44.539 13:52:46 -- common/autotest_common.sh@10 -- # set +x 00:22:49.843 13:52:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:49.843 13:52:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:49.843 13:52:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:49.843 13:52:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:49.843 13:52:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:49.843 13:52:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:49.843 13:52:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:49.843 13:52:51 -- nvmf/common.sh@294 -- # net_devs=() 00:22:49.843 13:52:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:49.843 13:52:51 -- nvmf/common.sh@295 -- # e810=() 00:22:49.843 13:52:51 -- nvmf/common.sh@295 -- # local -ga e810 00:22:49.843 13:52:51 -- nvmf/common.sh@296 -- # x722=() 
00:22:49.843 13:52:51 -- nvmf/common.sh@296 -- # local -ga x722 00:22:49.843 13:52:51 -- nvmf/common.sh@297 -- # mlx=() 00:22:49.843 13:52:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:49.843 13:52:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.843 13:52:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:49.843 13:52:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:49.843 13:52:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.843 13:52:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:49.843 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:49.843 13:52:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.843 13:52:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:49.843 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:49.843 13:52:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.843 13:52:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.843 13:52:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.843 13:52:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:49.843 Found net devices under 0000:86:00.0: cvl_0_0 00:22:49.843 13:52:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:49.843 13:52:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.843 13:52:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.843 13:52:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.843 13:52:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:49.843 Found net devices under 0000:86:00.1: cvl_0_1 00:22:49.843 13:52:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.843 13:52:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:49.843 13:52:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:49.843 13:52:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:49.843 13:52:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.843 13:52:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.843 13:52:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.843 13:52:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:49.843 13:52:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.843 13:52:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.843 13:52:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:49.843 13:52:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.843 13:52:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.843 13:52:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:49.843 13:52:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:49.844 13:52:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.844 13:52:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.844 13:52:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.844 13:52:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.844 13:52:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:49.844 13:52:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.844 13:52:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.844 13:52:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.844 13:52:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:49.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:49.844 00:22:49.844 --- 10.0.0.2 ping statistics --- 00:22:49.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.844 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:49.844 13:52:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:22:49.844 00:22:49.844 --- 10.0.0.1 ping statistics --- 00:22:49.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.844 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:22:49.844 13:52:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.844 13:52:52 -- nvmf/common.sh@410 -- # return 0 00:22:49.844 13:52:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:49.844 13:52:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.844 13:52:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:49.844 13:52:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:49.844 13:52:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.844 13:52:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:49.844 13:52:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:49.844 13:52:52 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1653814 00:22:49.844 13:52:52 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:49.844 13:52:52 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:49.844 13:52:52 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1653814 00:22:49.844 13:52:52 -- common/autotest_common.sh@819 -- # '[' -z 1653814 ']' 00:22:49.844 13:52:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.844 13:52:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.844 13:52:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
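[Editor's annotation - not part of the captured log] The fuzz test reuses the network plumbing already seen in the TLS and FIPS runs: one port of the Intel E810 (ice) NIC, cvl_0_0 at 10.0.0.2, is moved into a private namespace, cvl_0_0_ns_spdk, where the nvmf target runs, while its peer port cvl_0_1 at 10.0.0.1 stays in the root namespace for the initiator side, so the two pings above exercise the real NIC path rather than loopback (the two ports are presumably cabled back to back on this phy test bed). The shape of the setup, using the exact commands from the trace:

    # Isolate the target-side port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends of the link (initiator stays in the root namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Allow NVMe/TCP (port 4420) traffic in through the initiator-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The target (nvmf_tgt, pid 1653814) is launched inside that namespace via "ip netns exec cvl_0_0_ns_spdk", and it is that process the trace is waiting on here.
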
00:22:49.844 13:52:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.844 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 13:52:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:50.785 13:52:52 -- common/autotest_common.sh@852 -- # return 0 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.785 13:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.785 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 13:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:50.785 13:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.785 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 Malloc0 00:22:50.785 13:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.785 13:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.785 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 13:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:50.785 13:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.785 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 13:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.785 13:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.785 13:52:52 -- common/autotest_common.sh@10 -- # set +x 00:22:50.785 13:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:50.785 13:52:52 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:22.861 Fuzzing completed. Shutting down the fuzz application 00:23:22.861 00:23:22.861 Dumping successful admin opcodes: 00:23:22.861 8, 9, 10, 24, 00:23:22.861 Dumping successful io opcodes: 00:23:22.861 0, 9, 00:23:22.861 NS: 0x200003aeff00 I/O qp, Total commands completed: 888131, total successful commands: 5166, random_seed: 2128676096 00:23:22.861 NS: 0x200003aeff00 admin qp, Total commands completed: 84617, total successful commands: 673, random_seed: 1403809792 00:23:22.861 13:53:23 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:22.861 Fuzzing completed. 
Shutting down the fuzz application 00:23:22.861 00:23:22.861 Dumping successful admin opcodes: 00:23:22.861 24, 00:23:22.861 Dumping successful io opcodes: 00:23:22.861 00:23:22.861 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 360231876 00:23:22.861 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 360304698 00:23:22.861 13:53:24 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.861 13:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:22.861 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:23:22.861 13:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:22.861 13:53:24 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:22.861 13:53:24 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:22.861 13:53:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:22.861 13:53:24 -- nvmf/common.sh@116 -- # sync 00:23:22.861 13:53:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:22.861 13:53:24 -- nvmf/common.sh@119 -- # set +e 00:23:22.861 13:53:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:22.861 13:53:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:22.861 rmmod nvme_tcp 00:23:22.861 rmmod nvme_fabrics 00:23:22.861 rmmod nvme_keyring 00:23:22.861 13:53:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:22.861 13:53:24 -- nvmf/common.sh@123 -- # set -e 00:23:22.861 13:53:24 -- nvmf/common.sh@124 -- # return 0 00:23:22.861 13:53:24 -- nvmf/common.sh@477 -- # '[' -n 1653814 ']' 00:23:22.861 13:53:24 -- nvmf/common.sh@478 -- # killprocess 1653814 00:23:22.861 13:53:24 -- common/autotest_common.sh@926 -- # '[' -z 1653814 ']' 00:23:22.861 13:53:24 -- common/autotest_common.sh@930 -- # kill -0 1653814 00:23:22.861 13:53:24 -- common/autotest_common.sh@931 -- # uname 00:23:22.861 13:53:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:22.861 13:53:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1653814 00:23:22.861 13:53:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:22.862 13:53:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:22.862 13:53:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1653814' 00:23:22.862 killing process with pid 1653814 00:23:22.862 13:53:24 -- common/autotest_common.sh@945 -- # kill 1653814 00:23:22.862 13:53:24 -- common/autotest_common.sh@950 -- # wait 1653814 00:23:22.862 13:53:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:22.862 13:53:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:22.862 13:53:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:22.862 13:53:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.862 13:53:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:22.862 13:53:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.862 13:53:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.862 13:53:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.764 13:53:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:24.764 13:53:26 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:24.764 00:23:24.764 real 0m40.261s 00:23:24.764 user 0m52.980s 00:23:24.764 sys 
0m16.862s 00:23:24.764 13:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.764 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:24.764 ************************************ 00:23:24.764 END TEST nvmf_fuzz 00:23:24.764 ************************************ 00:23:24.764 13:53:26 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:24.764 13:53:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:24.764 13:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:24.764 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:23:24.764 ************************************ 00:23:24.764 START TEST nvmf_multiconnection 00:23:24.764 ************************************ 00:23:24.764 13:53:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:24.764 * Looking for test storage... 00:23:24.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.764 13:53:27 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.764 13:53:27 -- nvmf/common.sh@7 -- # uname -s 00:23:24.764 13:53:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.764 13:53:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.764 13:53:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.764 13:53:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.764 13:53:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.764 13:53:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.764 13:53:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.764 13:53:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.764 13:53:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.764 13:53:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.764 13:53:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.764 13:53:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.764 13:53:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.764 13:53:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.764 13:53:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.764 13:53:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.764 13:53:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.764 13:53:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.764 13:53:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.764 13:53:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.765 13:53:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.765 13:53:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.765 13:53:27 -- paths/export.sh@5 -- # export PATH 00:23:24.765 13:53:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.765 13:53:27 -- nvmf/common.sh@46 -- # : 0 00:23:24.765 13:53:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:24.765 13:53:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:24.765 13:53:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:24.765 13:53:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.765 13:53:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.765 13:53:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:24.765 13:53:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:24.765 13:53:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:24.765 13:53:27 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.765 13:53:27 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.765 13:53:27 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:24.765 13:53:27 -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:24.765 13:53:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:24.765 13:53:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.765 13:53:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:24.765 13:53:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:24.765 13:53:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:24.765 13:53:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.765 13:53:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.765 13:53:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.765 13:53:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:24.765 13:53:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:24.765 13:53:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:24.765 13:53:27 -- common/autotest_common.sh@10 -- 
# set +x 00:23:30.039 13:53:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:30.039 13:53:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:30.039 13:53:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:30.039 13:53:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:30.039 13:53:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:30.039 13:53:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:30.039 13:53:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:30.039 13:53:32 -- nvmf/common.sh@294 -- # net_devs=() 00:23:30.039 13:53:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:30.039 13:53:32 -- nvmf/common.sh@295 -- # e810=() 00:23:30.039 13:53:32 -- nvmf/common.sh@295 -- # local -ga e810 00:23:30.039 13:53:32 -- nvmf/common.sh@296 -- # x722=() 00:23:30.039 13:53:32 -- nvmf/common.sh@296 -- # local -ga x722 00:23:30.039 13:53:32 -- nvmf/common.sh@297 -- # mlx=() 00:23:30.039 13:53:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:30.039 13:53:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.039 13:53:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.039 13:53:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.039 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.039 13:53:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:30.039 13:53:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.039 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.039 13:53:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.039 13:53:32 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.039 13:53:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.039 13:53:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.039 13:53:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.039 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.039 13:53:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:30.039 13:53:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.039 13:53:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.039 13:53:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.039 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.039 13:53:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:30.039 13:53:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:30.039 13:53:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.039 13:53:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.039 13:53:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:30.039 13:53:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.039 13:53:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.039 13:53:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:30.039 13:53:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.039 13:53:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.039 13:53:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:30.039 13:53:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:30.039 13:53:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.039 13:53:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.039 13:53:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.039 13:53:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.039 13:53:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:30.039 13:53:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.039 13:53:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.039 13:53:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.039 13:53:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:30.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
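# Annotation: the nvmf_tcp_init sequence traced above, collected into one
# runnable sketch. The interface names (cvl_0_0/cvl_0_1), the namespace name,
# the 10.0.0.0/24 addressing and TCP port 4420 are all taken from this log;
# run as root. This is a reference sketch, not the harness's exact code.
ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # initiator -> target reachability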
00:23:30.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:23:30.039 00:23:30.039 --- 10.0.0.2 ping statistics --- 00:23:30.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.039 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:23:30.039 13:53:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:30.039 00:23:30.039 --- 10.0.0.1 ping statistics --- 00:23:30.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.039 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:30.039 13:53:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.039 13:53:32 -- nvmf/common.sh@410 -- # return 0 00:23:30.039 13:53:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:30.039 13:53:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.039 13:53:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:30.039 13:53:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.039 13:53:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:30.039 13:53:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:30.298 13:53:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:23:30.298 13:53:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:30.298 13:53:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:30.298 13:53:32 -- common/autotest_common.sh@10 -- # set +x 00:23:30.298 13:53:32 -- nvmf/common.sh@469 -- # nvmfpid=1662690 00:23:30.298 13:53:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:30.298 13:53:32 -- nvmf/common.sh@470 -- # waitforlisten 1662690 00:23:30.298 13:53:32 -- common/autotest_common.sh@819 -- # '[' -z 1662690 ']' 00:23:30.298 13:53:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.298 13:53:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:30.298 13:53:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.298 13:53:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:30.298 13:53:32 -- common/autotest_common.sh@10 -- # set +x 00:23:30.298 [2024-07-11 13:53:32.570965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:30.298 [2024-07-11 13:53:32.571008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.298 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.298 [2024-07-11 13:53:32.631607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.298 [2024-07-11 13:53:32.672343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:30.298 [2024-07-11 13:53:32.672457] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
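# Annotation: the target launch that produced the EAL/app notices above,
# reduced to a standalone sketch. Flags (-i 0 -e 0xFFFF -m 0xF) and the
# namespace are as traced; the binary path is abbreviated from the workspace
# path in the log, and the readiness loop is a simplified stand-in for
# waitforlisten (the real helper also bounds retries against the pid).
ip netns exec cvl_0_0_ns_spdk \
    ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC UNIX socket until the app answers; rpc_get_methods is a
# cheap query that succeeds once the server is listening on the socket.
until ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done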
00:23:30.298 [2024-07-11 13:53:32.672466] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.298 [2024-07-11 13:53:32.672473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.298 [2024-07-11 13:53:32.672515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.298 [2024-07-11 13:53:32.672541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.298 [2024-07-11 13:53:32.672626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.298 [2024-07-11 13:53:32.672627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.236 13:53:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:31.236 13:53:33 -- common/autotest_common.sh@852 -- # return 0 00:23:31.236 13:53:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:31.236 13:53:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.236 13:53:33 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 [2024-07-11 13:53:33.419582] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # seq 1 11 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc1 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 [2024-07-11 13:53:33.475520] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:23:31.236 13:53:33 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc2 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc3 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc4 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc5 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 Malloc6 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.236 13:53:33 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.236 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.236 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:23:31.236 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.236 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 Malloc7 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.496 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 Malloc8 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.496 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 Malloc9 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
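# Annotation: iterations 1..11 above and below all repeat one pattern; the
# equivalent sequence against the live target via scripts/rpc.py (the tool
# the suite's rpc_cmd wrapper drives) looks roughly like this. The sizes
# (64 MB malloc bdev, 512 B blocks), NQNs, serials, listen address and port
# are as traced; the $rpc shorthand is the only thing assumed here.
rpc=./spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192         # once, before the loop
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"     # 64 MB RAM-backed namespace
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420                   # NVMe/TCP listener per subsystem
done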
00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.496 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 Malloc10 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.496 13:53:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 Malloc11 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.496 13:53:33 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:23:31.496 13:53:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.496 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:23:31.496 13:53:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.497 13:53:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:23:31.497 13:53:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.497 13:53:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:32.875 13:53:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:23:32.875 13:53:35 -- common/autotest_common.sh@1177 -- # local i=0 00:23:32.875 13:53:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:32.875 13:53:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:32.875 13:53:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:34.831 13:53:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:34.831 13:53:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:34.831 13:53:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:23:34.831 13:53:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:34.831 13:53:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.831 13:53:37 -- common/autotest_common.sh@1187 -- # return 0 00:23:34.831 13:53:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:34.831 13:53:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:23:35.765 13:53:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:23:35.765 13:53:38 -- common/autotest_common.sh@1177 -- # local i=0 00:23:35.766 13:53:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:35.766 13:53:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:35.766 13:53:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:37.718 13:53:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:37.977 13:53:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:37.977 13:53:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:37.977 13:53:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:37.977 13:53:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:37.977 13:53:40 -- common/autotest_common.sh@1187 -- # return 0 00:23:37.977 13:53:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:37.977 13:53:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:38.914 13:53:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:38.915 13:53:41 -- common/autotest_common.sh@1177 -- # local i=0 00:23:38.915 13:53:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:38.915 13:53:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:38.915 13:53:41 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:23:41.451 13:53:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:41.451 13:53:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:41.451 13:53:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:23:41.451 13:53:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:41.451 13:53:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:41.451 13:53:43 -- common/autotest_common.sh@1187 -- # return 0 00:23:41.451 13:53:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:41.451 13:53:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:42.386 13:53:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:42.386 13:53:44 -- common/autotest_common.sh@1177 -- # local i=0 00:23:42.386 13:53:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:42.386 13:53:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:42.386 13:53:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:44.290 13:53:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:44.290 13:53:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:44.290 13:53:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:23:44.290 13:53:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:44.290 13:53:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:44.290 13:53:46 -- common/autotest_common.sh@1187 -- # return 0 00:23:44.290 13:53:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:44.290 13:53:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:45.669 13:53:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:45.669 13:53:47 -- common/autotest_common.sh@1177 -- # local i=0 00:23:45.669 13:53:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.669 13:53:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:45.669 13:53:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:47.575 13:53:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:47.575 13:53:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:47.575 13:53:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:23:47.575 13:53:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:47.575 13:53:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.575 13:53:49 -- common/autotest_common.sh@1187 -- # return 0 00:23:47.575 13:53:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:47.575 13:53:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:48.953 13:53:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:48.953 13:53:51 -- common/autotest_common.sh@1177 -- # local i=0 00:23:48.953 13:53:51 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:23:48.953 13:53:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:48.953 13:53:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:50.859 13:53:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:50.859 13:53:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:50.859 13:53:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:23:50.859 13:53:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:50.859 13:53:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.859 13:53:53 -- common/autotest_common.sh@1187 -- # return 0 00:23:50.859 13:53:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:50.859 13:53:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:52.235 13:53:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:52.235 13:53:54 -- common/autotest_common.sh@1177 -- # local i=0 00:23:52.235 13:53:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.235 13:53:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:52.235 13:53:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:54.173 13:53:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:54.173 13:53:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:54.173 13:53:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:23:54.173 13:53:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:54.173 13:53:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.173 13:53:56 -- common/autotest_common.sh@1187 -- # return 0 00:23:54.174 13:53:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:54.174 13:53:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:55.551 13:53:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:55.551 13:53:57 -- common/autotest_common.sh@1177 -- # local i=0 00:23:55.551 13:53:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.551 13:53:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:55.551 13:53:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:57.454 13:53:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:57.454 13:53:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:57.454 13:53:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:23:57.454 13:53:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:57.454 13:53:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.454 13:53:59 -- common/autotest_common.sh@1187 -- # return 0 00:23:57.454 13:53:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.454 13:53:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:58.835 13:54:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:58.835 
13:54:01 -- common/autotest_common.sh@1177 -- # local i=0 00:23:58.835 13:54:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:58.835 13:54:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:58.835 13:54:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:01.371 13:54:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:01.371 13:54:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:01.371 13:54:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:24:01.371 13:54:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:01.371 13:54:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.371 13:54:03 -- common/autotest_common.sh@1187 -- # return 0 00:24:01.371 13:54:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.371 13:54:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:02.308 13:54:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:02.308 13:54:04 -- common/autotest_common.sh@1177 -- # local i=0 00:24:02.308 13:54:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.308 13:54:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:02.308 13:54:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:04.214 13:54:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:04.214 13:54:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:04.214 13:54:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:24:04.473 13:54:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:04.473 13:54:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.473 13:54:06 -- common/autotest_common.sh@1187 -- # return 0 00:24:04.473 13:54:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.473 13:54:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:05.852 13:54:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:05.852 13:54:08 -- common/autotest_common.sh@1177 -- # local i=0 00:24:05.852 13:54:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:05.852 13:54:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:05.852 13:54:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:07.757 13:54:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:07.757 13:54:10 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:24:07.757 13:54:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:07.757 13:54:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:07.757 13:54:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:07.757 13:54:10 -- common/autotest_common.sh@1187 -- # return 0 00:24:07.757 13:54:10 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:07.757 [global] 00:24:07.757 thread=1 00:24:07.757 invalidate=1 00:24:07.757 rw=read 00:24:07.757 time_based=1 00:24:07.757 
runtime=10 00:24:07.757 ioengine=libaio 00:24:07.757 direct=1 00:24:07.757 bs=262144 00:24:07.757 iodepth=64 00:24:07.757 norandommap=1 00:24:07.757 numjobs=1 00:24:07.757 00:24:07.757 [job0] 00:24:07.757 filename=/dev/nvme0n1 00:24:07.757 [job1] 00:24:07.757 filename=/dev/nvme10n1 00:24:07.757 [job2] 00:24:07.757 filename=/dev/nvme1n1 00:24:07.757 [job3] 00:24:07.757 filename=/dev/nvme2n1 00:24:07.757 [job4] 00:24:07.757 filename=/dev/nvme3n1 00:24:07.757 [job5] 00:24:07.757 filename=/dev/nvme4n1 00:24:07.757 [job6] 00:24:07.757 filename=/dev/nvme5n1 00:24:07.757 [job7] 00:24:07.757 filename=/dev/nvme6n1 00:24:07.757 [job8] 00:24:07.757 filename=/dev/nvme7n1 00:24:07.757 [job9] 00:24:07.757 filename=/dev/nvme8n1 00:24:07.757 [job10] 00:24:07.757 filename=/dev/nvme9n1 00:24:08.016 Could not set queue depth (nvme0n1) 00:24:08.016 Could not set queue depth (nvme10n1) 00:24:08.016 Could not set queue depth (nvme1n1) 00:24:08.016 Could not set queue depth (nvme2n1) 00:24:08.016 Could not set queue depth (nvme3n1) 00:24:08.016 Could not set queue depth (nvme4n1) 00:24:08.016 Could not set queue depth (nvme5n1) 00:24:08.016 Could not set queue depth (nvme6n1) 00:24:08.016 Could not set queue depth (nvme7n1) 00:24:08.016 Could not set queue depth (nvme8n1) 00:24:08.016 Could not set queue depth (nvme9n1) 00:24:08.275 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:08.275 fio-3.35 00:24:08.275 Starting 11 threads 00:24:20.522 00:24:20.522 job0: (groupid=0, jobs=1): err= 0: pid=1669993: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=810, BW=203MiB/s (213MB/s)(2033MiB/10026msec) 00:24:20.522 slat (usec): min=9, max=104121, avg=707.66, stdev=3356.47 00:24:20.522 clat (usec): min=1058, max=217382, avg=78110.49, stdev=38112.52 00:24:20.522 lat (usec): min=1099, max=248001, avg=78818.15, stdev=38595.45 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 41], 00:24:20.522 | 30.00th=[ 56], 40.00th=[ 68], 50.00th=[ 82], 60.00th=[ 92], 00:24:20.522 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 140], 00:24:20.522 | 99.00th=[ 165], 99.50th=[ 190], 99.90th=[ 213], 99.95th=[ 215], 00:24:20.522 | 99.99th=[ 218] 00:24:20.522 bw ( KiB/s): min=143360, max=288256, per=8.84%, 
avg=206540.80, stdev=37462.24, samples=20 00:24:20.522 iops : min= 560, max= 1126, avg=806.70, stdev=146.25, samples=20 00:24:20.522 lat (msec) : 2=0.22%, 4=0.23%, 10=1.19%, 20=4.97%, 50=20.49% 00:24:20.522 lat (msec) : 100=43.39%, 250=29.50% 00:24:20.522 cpu : usr=0.23%, sys=2.73%, ctx=2295, majf=0, minf=3347 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=8131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job1: (groupid=0, jobs=1): err= 0: pid=1669994: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=807, BW=202MiB/s (212MB/s)(2027MiB/10041msec) 00:24:20.522 slat (usec): min=9, max=104365, avg=933.14, stdev=3367.69 00:24:20.522 clat (usec): min=1150, max=226338, avg=78242.48, stdev=34767.42 00:24:20.522 lat (usec): min=1182, max=238558, avg=79175.62, stdev=35186.70 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 39], 20.00th=[ 50], 00:24:20.522 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 88], 00:24:20.522 | 70.00th=[ 99], 80.00th=[ 110], 90.00th=[ 124], 95.00th=[ 142], 00:24:20.522 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 186], 00:24:20.522 | 99.99th=[ 226] 00:24:20.522 bw ( KiB/s): min=95744, max=316416, per=8.82%, avg=205952.00, stdev=64749.57, samples=20 00:24:20.522 iops : min= 374, max= 1236, avg=804.50, stdev=252.93, samples=20 00:24:20.522 lat (msec) : 2=0.16%, 4=0.27%, 10=0.64%, 20=1.07%, 50=19.43% 00:24:20.522 lat (msec) : 100=49.96%, 250=28.47% 00:24:20.522 cpu : usr=0.32%, sys=3.14%, ctx=1978, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=8108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job2: (groupid=0, jobs=1): err= 0: pid=1669995: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=945, BW=236MiB/s (248MB/s)(2384MiB/10087msec) 00:24:20.522 slat (usec): min=8, max=107006, avg=688.57, stdev=3494.14 00:24:20.522 clat (usec): min=910, max=271430, avg=66909.05, stdev=41321.01 00:24:20.522 lat (usec): min=952, max=271461, avg=67597.62, stdev=41701.97 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 32], 00:24:20.522 | 30.00th=[ 41], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 69], 00:24:20.522 | 70.00th=[ 84], 80.00th=[ 101], 90.00th=[ 124], 95.00th=[ 150], 00:24:20.522 | 99.00th=[ 182], 99.50th=[ 205], 99.90th=[ 247], 99.95th=[ 247], 00:24:20.522 | 99.99th=[ 271] 00:24:20.522 bw ( KiB/s): min=100352, max=368640, per=10.38%, avg=242511.85, stdev=79058.45, samples=20 00:24:20.522 iops : min= 392, max= 1440, avg=947.30, stdev=308.82, samples=20 00:24:20.522 lat (usec) : 1000=0.01% 00:24:20.522 lat (msec) : 2=0.07%, 4=0.25%, 10=1.43%, 20=5.95%, 50=34.80% 00:24:20.522 lat (msec) : 100=37.80%, 250=19.66%, 500=0.03% 00:24:20.522 cpu : usr=0.33%, sys=3.36%, ctx=2379, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=9537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job3: (groupid=0, jobs=1): err= 0: pid=1669996: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=780, BW=195MiB/s (205MB/s)(1969MiB/10092msec) 00:24:20.522 slat (usec): min=8, max=91522, avg=677.68, stdev=3199.62 00:24:20.522 clat (usec): min=1823, max=238301, avg=81237.63, stdev=38900.72 00:24:20.522 lat (usec): min=1858, max=238343, avg=81915.31, stdev=39250.82 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 48], 00:24:20.522 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 81], 60.00th=[ 92], 00:24:20.522 | 70.00th=[ 102], 80.00th=[ 112], 90.00th=[ 133], 95.00th=[ 148], 00:24:20.522 | 99.00th=[ 167], 99.50th=[ 205], 99.90th=[ 228], 99.95th=[ 228], 00:24:20.522 | 99.99th=[ 239] 00:24:20.522 bw ( KiB/s): min=105472, max=331264, per=8.56%, avg=199980.20, stdev=56374.29, samples=20 00:24:20.522 iops : min= 412, max= 1294, avg=781.15, stdev=220.22, samples=20 00:24:20.522 lat (msec) : 2=0.01%, 4=0.66%, 10=3.15%, 20=2.57%, 50=15.46% 00:24:20.522 lat (msec) : 100=46.96%, 250=31.19% 00:24:20.522 cpu : usr=0.31%, sys=2.80%, ctx=2304, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=7874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job4: (groupid=0, jobs=1): err= 0: pid=1669997: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=937, BW=234MiB/s (246MB/s)(2350MiB/10026msec) 00:24:20.522 slat (usec): min=7, max=75547, avg=808.62, stdev=3094.71 00:24:20.522 clat (msec): min=2, max=228, avg=67.39, stdev=38.48 00:24:20.522 lat (msec): min=2, max=240, avg=68.20, stdev=38.98 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 30], 00:24:20.522 | 30.00th=[ 40], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 73], 00:24:20.522 | 70.00th=[ 86], 80.00th=[ 101], 90.00th=[ 120], 95.00th=[ 133], 00:24:20.522 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 211], 99.95th=[ 224], 00:24:20.522 | 99.99th=[ 228] 00:24:20.522 bw ( KiB/s): min=146432, max=401920, per=10.23%, avg=239060.80, stdev=81835.75, samples=20 00:24:20.522 iops : min= 572, max= 1570, avg=933.80, stdev=319.63, samples=20 00:24:20.522 lat (msec) : 4=0.34%, 10=2.46%, 20=4.35%, 50=29.95%, 100=43.09% 00:24:20.522 lat (msec) : 250=19.82% 00:24:20.522 cpu : usr=0.39%, sys=3.52%, ctx=2306, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=9400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job5: (groupid=0, jobs=1): err= 0: pid=1670002: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=709, BW=177MiB/s (186MB/s)(1790MiB/10091msec) 00:24:20.522 slat (usec): min=9, max=80401, avg=898.26, stdev=3719.70 00:24:20.522 clat (usec): min=1110, 
max=280937, avg=89201.93, stdev=38471.96 00:24:20.522 lat (usec): min=1141, max=280983, avg=90100.19, stdev=39011.06 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 61], 00:24:20.522 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 96], 00:24:20.522 | 70.00th=[ 105], 80.00th=[ 116], 90.00th=[ 138], 95.00th=[ 157], 00:24:20.522 | 99.00th=[ 194], 99.50th=[ 209], 99.90th=[ 215], 99.95th=[ 253], 00:24:20.522 | 99.99th=[ 279] 00:24:20.522 bw ( KiB/s): min=102912, max=274432, per=7.78%, avg=181680.10, stdev=44736.08, samples=20 00:24:20.522 iops : min= 402, max= 1072, avg=709.65, stdev=174.71, samples=20 00:24:20.522 lat (msec) : 2=0.04%, 4=0.34%, 10=1.49%, 20=2.36%, 50=10.95% 00:24:20.522 lat (msec) : 100=50.28%, 250=34.47%, 500=0.07% 00:24:20.522 cpu : usr=0.23%, sys=2.59%, ctx=2043, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=7160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job6: (groupid=0, jobs=1): err= 0: pid=1670003: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=817, BW=204MiB/s (214MB/s)(2069MiB/10123msec) 00:24:20.522 slat (usec): min=7, max=66535, avg=855.42, stdev=3179.58 00:24:20.522 clat (usec): min=1077, max=260137, avg=77305.11, stdev=39663.32 00:24:20.522 lat (usec): min=1112, max=260163, avg=78160.53, stdev=40102.67 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 44], 00:24:20.522 | 30.00th=[ 53], 40.00th=[ 67], 50.00th=[ 79], 60.00th=[ 88], 00:24:20.522 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 128], 95.00th=[ 144], 00:24:20.522 | 99.00th=[ 171], 99.50th=[ 188], 99.90th=[ 241], 99.95th=[ 243], 00:24:20.522 | 99.99th=[ 262] 00:24:20.522 bw ( KiB/s): min=113152, max=355328, per=9.00%, avg=210266.00, stdev=71003.06, samples=20 00:24:20.522 iops : min= 442, max= 1388, avg=821.35, stdev=277.36, samples=20 00:24:20.522 lat (msec) : 2=0.16%, 4=0.81%, 10=2.08%, 20=6.19%, 50=18.57% 00:24:20.522 lat (msec) : 100=43.46%, 250=28.72%, 500=0.01% 00:24:20.522 cpu : usr=0.26%, sys=2.98%, ctx=2132, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=8276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job7: (groupid=0, jobs=1): err= 0: pid=1670004: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=757, BW=189MiB/s (199MB/s)(1911MiB/10086msec) 00:24:20.522 slat (usec): min=8, max=115216, avg=883.76, stdev=3601.08 00:24:20.522 clat (usec): min=1293, max=260582, avg=83469.90, stdev=41797.68 00:24:20.522 lat (usec): min=1324, max=260618, avg=84353.65, stdev=42302.08 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 47], 00:24:20.522 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 95], 00:24:20.522 | 70.00th=[ 106], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 157], 00:24:20.522 | 99.00th=[ 188], 99.50th=[ 203], 99.90th=[ 222], 99.95th=[ 255], 00:24:20.522 | 
99.99th=[ 262] 00:24:20.522 bw ( KiB/s): min=107520, max=323072, per=8.31%, avg=194048.00, stdev=57563.97, samples=20 00:24:20.522 iops : min= 420, max= 1262, avg=758.00, stdev=224.86, samples=20 00:24:20.522 lat (msec) : 2=0.17%, 4=0.92%, 10=1.71%, 20=3.66%, 50=15.18% 00:24:20.522 lat (msec) : 100=43.31%, 250=35.00%, 500=0.05% 00:24:20.522 cpu : usr=0.35%, sys=2.72%, ctx=2174, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.522 issued rwts: total=7643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.522 job8: (groupid=0, jobs=1): err= 0: pid=1670011: Thu Jul 11 13:54:21 2024 00:24:20.522 read: IOPS=903, BW=226MiB/s (237MB/s)(2278MiB/10091msec) 00:24:20.522 slat (usec): min=8, max=120491, avg=616.09, stdev=2892.41 00:24:20.522 clat (usec): min=1957, max=262043, avg=70140.31, stdev=35661.52 00:24:20.522 lat (msec): min=2, max=262, avg=70.76, stdev=35.97 00:24:20.522 clat percentiles (msec): 00:24:20.522 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 40], 00:24:20.522 | 30.00th=[ 45], 40.00th=[ 53], 50.00th=[ 68], 60.00th=[ 81], 00:24:20.522 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 118], 95.00th=[ 130], 00:24:20.522 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 197], 99.95th=[ 207], 00:24:20.522 | 99.99th=[ 262] 00:24:20.522 bw ( KiB/s): min=153600, max=419840, per=9.92%, avg=231701.00, stdev=72122.87, samples=20 00:24:20.522 iops : min= 600, max= 1640, avg=905.05, stdev=281.74, samples=20 00:24:20.522 lat (msec) : 2=0.01%, 4=0.22%, 10=1.89%, 20=4.83%, 50=30.70% 00:24:20.522 lat (msec) : 100=40.54%, 250=21.79%, 500=0.02% 00:24:20.522 cpu : usr=0.33%, sys=2.96%, ctx=2636, majf=0, minf=4097 00:24:20.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:20.523 issued rwts: total=9113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.523 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:20.523 job9: (groupid=0, jobs=1): err= 0: pid=1670012: Thu Jul 11 13:54:21 2024 00:24:20.523 read: IOPS=831, BW=208MiB/s (218MB/s)(2098MiB/10095msec) 00:24:20.523 slat (usec): min=10, max=96062, avg=781.94, stdev=3399.91 00:24:20.523 clat (usec): min=1788, max=240010, avg=76090.68, stdev=43787.26 00:24:20.523 lat (usec): min=1833, max=240051, avg=76872.62, stdev=44295.18 00:24:20.523 clat percentiles (msec): 00:24:20.523 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 31], 00:24:20.523 | 30.00th=[ 45], 40.00th=[ 68], 50.00th=[ 82], 60.00th=[ 91], 00:24:20.523 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 126], 95.00th=[ 153], 00:24:20.523 | 99.00th=[ 201], 99.50th=[ 213], 99.90th=[ 224], 99.95th=[ 224], 00:24:20.523 | 99.99th=[ 241] 00:24:20.523 bw ( KiB/s): min=106496, max=419328, per=9.13%, avg=213222.40, stdev=82649.96, samples=20 00:24:20.523 iops : min= 416, max= 1638, avg=832.90, stdev=322.85, samples=20 00:24:20.523 lat (msec) : 2=0.01%, 4=0.38%, 10=4.31%, 20=6.55%, 50=20.75% 00:24:20.523 lat (msec) : 100=38.45%, 250=29.54% 00:24:20.523 cpu : usr=0.33%, sys=2.99%, ctx=2166, majf=0, minf=4097 00:24:20.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:20.523 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:20.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:20.523 issued rwts: total=8392,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:20.523 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:20.523 job10: (groupid=0, jobs=1): err= 0: pid=1670013: Thu Jul 11 13:54:21 2024
00:24:20.523 read: IOPS=870, BW=218MiB/s (228MB/s)(2187MiB/10045msec)
00:24:20.523 slat (usec): min=9, max=73892, avg=979.12, stdev=3464.72
00:24:20.523 clat (msec): min=2, max=247, avg=72.44, stdev=42.53
00:24:20.523 lat (msec): min=2, max=247, avg=73.42, stdev=43.08
00:24:20.523 clat percentiles (msec):
00:24:20.523 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 33],
00:24:20.523 | 30.00th=[ 43], 40.00th=[ 52], 50.00th=[ 63], 60.00th=[ 75],
00:24:20.523 | 70.00th=[ 92], 80.00th=[ 113], 90.00th=[ 133], 95.00th=[ 155],
00:24:20.523 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 220],
00:24:20.523 | 99.99th=[ 249]
00:24:20.523 bw ( KiB/s): min=99328, max=482304, per=9.52%, avg=222336.00, stdev=106535.62, samples=20
00:24:20.523 iops : min= 388, max= 1884, avg=868.50, stdev=416.15, samples=20
00:24:20.523 lat (msec) : 4=0.14%, 10=0.75%, 20=2.35%, 50=35.17%, 100=35.12%
00:24:20.523 lat (msec) : 250=26.46%
00:24:20.523 cpu : usr=0.33%, sys=3.28%, ctx=1948, majf=0, minf=4097
00:24:20.523 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:24:20.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:24:20.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:24:20.523 issued rwts: total=8748,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:24:20.523 latency : target=0, window=0, percentile=100.00%, depth=64
00:24:20.523
00:24:20.523 Run status group 0 (all jobs):
00:24:20.523 READ: bw=2281MiB/s (2392MB/s), 177MiB/s-236MiB/s (186MB/s-248MB/s), io=22.6GiB (24.2GB), run=10026-10123msec
00:24:20.523
00:24:20.523 Disk stats (read/write):
00:24:20.523 nvme0n1: ios=15966/0, merge=0/0, ticks=1240553/0, in_queue=1240553, util=97.23%
00:24:20.523 nvme10n1: ios=15994/0, merge=0/0, ticks=1238412/0, in_queue=1238412, util=97.36%
00:24:20.523 nvme1n1: ios=18880/0, merge=0/0, ticks=1231104/0, in_queue=1231104, util=97.61%
00:24:20.523 nvme2n1: ios=15600/0, merge=0/0, ticks=1239699/0, in_queue=1239699, util=97.83%
00:24:20.523 nvme3n1: ios=18547/0, merge=0/0, ticks=1237604/0, in_queue=1237604, util=97.85%
00:24:20.523 nvme4n1: ios=14157/0, merge=0/0, ticks=1236830/0, in_queue=1236830, util=98.20%
00:24:20.523 nvme5n1: ios=16380/0, merge=0/0, ticks=1233484/0, in_queue=1233484, util=98.39%
00:24:20.523 nvme6n1: ios=15015/0, merge=0/0, ticks=1239138/0, in_queue=1239138, util=98.47%
00:24:20.523 nvme7n1: ios=18012/0, merge=0/0, ticks=1238462/0, in_queue=1238462, util=98.87%
00:24:20.523 nvme8n1: ios=16617/0, merge=0/0, ticks=1232877/0, in_queue=1232877, util=99.10%
00:24:20.523 nvme9n1: ios=17210/0, merge=0/0, ticks=1233785/0, in_queue=1233785, util=99.18%
00:24:20.523 13:54:21 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
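[Editor's note: judging by the job file the wrapper echoes just below, its flags map directly onto fio options (-i 262144 -> bs, -d 64 -> iodepth, -t randwrite -> rw, -r 10 -> runtime), with one [jobN] stanza per kernel-attached namespace. A rough standalone equivalent, as a sketch only -- the file name multiconn.fio is hypothetical and this is not the wrapper's actual internals:

# Hand-written job file mirroring the wrapper-generated one that follows;
# add one [jobN] stanza per /dev/nvmeXn1 namespace to be exercised.
cat > multiconn.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio multiconn.fio
]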
00:24:20.523 [global]
00:24:20.523 thread=1
00:24:20.523 invalidate=1
00:24:20.523 rw=randwrite
00:24:20.523 time_based=1
00:24:20.523 runtime=10
00:24:20.523 ioengine=libaio
00:24:20.523 direct=1
00:24:20.523 bs=262144
00:24:20.523 iodepth=64
00:24:20.523 norandommap=1
00:24:20.523 numjobs=1
00:24:20.523
00:24:20.523 [job0]
00:24:20.523 filename=/dev/nvme0n1
00:24:20.523 [job1]
00:24:20.523 filename=/dev/nvme10n1
00:24:20.523 [job2]
00:24:20.523 filename=/dev/nvme1n1
00:24:20.523 [job3]
00:24:20.523 filename=/dev/nvme2n1
00:24:20.523 [job4]
00:24:20.523 filename=/dev/nvme3n1
00:24:20.523 [job5]
00:24:20.523 filename=/dev/nvme4n1
00:24:20.523 [job6]
00:24:20.523 filename=/dev/nvme5n1
00:24:20.523 [job7]
00:24:20.523 filename=/dev/nvme6n1
00:24:20.523 [job8]
00:24:20.523 filename=/dev/nvme7n1
00:24:20.523 [job9]
00:24:20.523 filename=/dev/nvme8n1
00:24:20.523 [job10]
00:24:20.523 filename=/dev/nvme9n1
00:24:20.523 Could not set queue depth (nvme0n1)
00:24:20.523 Could not set queue depth (nvme10n1)
00:24:20.523 Could not set queue depth (nvme1n1)
00:24:20.523 Could not set queue depth (nvme2n1)
00:24:20.523 Could not set queue depth (nvme3n1)
00:24:20.523 Could not set queue depth (nvme4n1)
00:24:20.523 Could not set queue depth (nvme5n1)
00:24:20.523 Could not set queue depth (nvme6n1)
00:24:20.523 Could not set queue depth (nvme7n1)
00:24:20.523 Could not set queue depth (nvme8n1)
00:24:20.523 Could not set queue depth (nvme9n1)
00:24:20.523 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:24:20.523 fio-3.35
00:24:20.523 Starting 11 threads
00:24:30.533
00:24:30.533 job0: (groupid=0, jobs=1): err= 0: pid=1671555: Thu Jul 11 13:54:32 2024
00:24:30.533 write: IOPS=546, BW=137MiB/s (143MB/s)(1378MiB/10083msec); 0 zone resets
00:24:30.533 slat (usec): min=27, max=73840, avg=1478.05, stdev=3620.76
00:24:30.533 clat (usec): min=1474, max=289116, avg=115545.91, stdev=56766.89
00:24:30.533 lat (usec): min=1599, max=300812, avg=117023.96, stdev=57556.15
00:24:30.533 clat percentiles (msec):
00:24:30.533 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 29], 20.00th=[ 58],
00:24:30.533 | 30.00th=[ 95], 40.00th=[ 103], 50.00th=[ 124], 60.00th=[ 138],
00:24:30.533 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 197],
00:24:30.533 | 99.00th=[ 232], 99.50th=[ 255], 99.90th=[ 284], 99.95th=[ 288],
00:24:30.533 | 99.99th=[ 288]
00:24:30.533 bw ( KiB/s): min=94208, max=232960, per=7.80%, avg=139481.80, stdev=42472.19, samples=20
00:24:30.533 iops : min= 368, max= 910, avg=544.85, stdev=165.91,
samples=20 00:24:30.533 lat (msec) : 2=0.05%, 4=0.69%, 10=2.18%, 20=4.46%, 50=10.56% 00:24:30.533 lat (msec) : 100=16.31%, 250=65.14%, 500=0.60% 00:24:30.533 cpu : usr=1.36%, sys=1.72%, ctx=2765, majf=0, minf=1 00:24:30.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:30.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.533 issued rwts: total=0,5511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.533 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.533 job1: (groupid=0, jobs=1): err= 0: pid=1671556: Thu Jul 11 13:54:32 2024 00:24:30.533 write: IOPS=638, BW=160MiB/s (167MB/s)(1617MiB/10124msec); 0 zone resets 00:24:30.533 slat (usec): min=29, max=49968, avg=1297.23, stdev=2735.30 00:24:30.533 clat (usec): min=1545, max=284262, avg=98845.08, stdev=38950.23 00:24:30.533 lat (usec): min=1601, max=284309, avg=100142.31, stdev=39369.76 00:24:30.533 clat percentiles (msec): 00:24:30.533 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 62], 20.00th=[ 71], 00:24:30.533 | 30.00th=[ 73], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 105], 00:24:30.534 | 70.00th=[ 118], 80.00th=[ 130], 90.00th=[ 138], 95.00th=[ 161], 00:24:30.534 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 262], 99.95th=[ 275], 00:24:30.534 | 99.99th=[ 284] 00:24:30.534 bw ( KiB/s): min=111104, max=233472, per=9.16%, avg=163959.55, stdev=35348.78, samples=20 00:24:30.534 iops : min= 434, max= 912, avg=640.45, stdev=138.08, samples=20 00:24:30.534 lat (msec) : 2=0.05%, 4=0.34%, 10=0.71%, 20=1.70%, 50=4.93% 00:24:30.534 lat (msec) : 100=40.02%, 250=52.10%, 500=0.15% 00:24:30.534 cpu : usr=2.09%, sys=2.03%, ctx=2584, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,6467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job2: (groupid=0, jobs=1): err= 0: pid=1671568: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=555, BW=139MiB/s (146MB/s)(1406MiB/10126msec); 0 zone resets 00:24:30.534 slat (usec): min=23, max=185053, avg=1379.41, stdev=4449.81 00:24:30.534 clat (usec): min=1896, max=445940, avg=113811.30, stdev=57928.93 00:24:30.534 lat (usec): min=1953, max=446000, avg=115190.71, stdev=58735.09 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 69], 00:24:30.534 | 30.00th=[ 94], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 124], 00:24:30.534 | 70.00th=[ 133], 80.00th=[ 155], 90.00th=[ 188], 95.00th=[ 224], 00:24:30.534 | 99.00th=[ 275], 99.50th=[ 309], 99.90th=[ 422], 99.95th=[ 435], 00:24:30.534 | 99.99th=[ 447] 00:24:30.534 bw ( KiB/s): min=55296, max=231936, per=7.95%, avg=142299.05, stdev=44982.87, samples=20 00:24:30.534 iops : min= 216, max= 906, avg=555.85, stdev=175.72, samples=20 00:24:30.534 lat (msec) : 2=0.02%, 4=0.12%, 10=1.62%, 20=2.51%, 50=10.39% 00:24:30.534 lat (msec) : 100=21.61%, 250=62.17%, 500=1.57% 00:24:30.534 cpu : usr=1.49%, sys=1.75%, ctx=2867, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,5622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job3: (groupid=0, jobs=1): err= 0: pid=1671569: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=474, BW=119MiB/s (124MB/s)(1199MiB/10101msec); 0 zone resets 00:24:30.534 slat (usec): min=18, max=39068, avg=1915.95, stdev=3993.03 00:24:30.534 clat (msec): min=2, max=237, avg=132.88, stdev=49.68 00:24:30.534 lat (msec): min=2, max=252, avg=134.80, stdev=50.39 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 62], 20.00th=[ 96], 00:24:30.534 | 30.00th=[ 105], 40.00th=[ 131], 50.00th=[ 140], 60.00th=[ 155], 00:24:30.534 | 70.00th=[ 163], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 203], 00:24:30.534 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 236], 99.95th=[ 239], 00:24:30.534 | 99.99th=[ 239] 00:24:30.534 bw ( KiB/s): min=89088, max=225280, per=6.77%, avg=121113.60, stdev=38927.41, samples=20 00:24:30.534 iops : min= 348, max= 880, avg=473.10, stdev=152.06, samples=20 00:24:30.534 lat (msec) : 4=0.15%, 10=1.08%, 20=1.31%, 50=5.21%, 100=15.46% 00:24:30.534 lat (msec) : 250=76.78% 00:24:30.534 cpu : usr=1.61%, sys=1.50%, ctx=1782, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,4794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job4: (groupid=0, jobs=1): err= 0: pid=1671570: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=645, BW=161MiB/s (169MB/s)(1634MiB/10125msec); 0 zone resets 00:24:30.534 slat (usec): min=28, max=141228, avg=1172.76, stdev=4257.90 00:24:30.534 clat (msec): min=3, max=283, avg=97.89, stdev=47.11 00:24:30.534 lat (msec): min=3, max=283, avg=99.06, stdev=47.62 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 59], 00:24:30.534 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 91], 60.00th=[ 110], 00:24:30.534 | 70.00th=[ 127], 80.00th=[ 138], 90.00th=[ 163], 95.00th=[ 180], 00:24:30.534 | 99.00th=[ 211], 99.50th=[ 243], 99.90th=[ 271], 99.95th=[ 275], 00:24:30.534 | 99.99th=[ 284] 00:24:30.534 bw ( KiB/s): min=93696, max=306176, per=9.26%, avg=165734.40, stdev=55948.15, samples=20 00:24:30.534 iops : min= 366, max= 1196, avg=647.40, stdev=218.55, samples=20 00:24:30.534 lat (msec) : 4=0.03%, 10=0.49%, 20=1.96%, 50=13.91%, 100=38.52% 00:24:30.534 lat (msec) : 250=44.76%, 500=0.34% 00:24:30.534 cpu : usr=1.36%, sys=2.00%, ctx=2969, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,6537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job5: (groupid=0, jobs=1): err= 0: pid=1671575: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=641, BW=160MiB/s (168MB/s)(1620MiB/10099msec); 0 zone resets 00:24:30.534 slat (usec): min=21, max=50652, avg=1465.67, stdev=2917.22 00:24:30.534 clat (msec): min=3, max=253, avg=98.25, stdev=40.11 00:24:30.534 lat (msec): min=3, 
max=256, avg=99.72, stdev=40.62 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 70], 00:24:30.534 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 101], 60.00th=[ 104], 00:24:30.534 | 70.00th=[ 115], 80.00th=[ 127], 90.00th=[ 140], 95.00th=[ 174], 00:24:30.534 | 99.00th=[ 232], 99.50th=[ 247], 99.90th=[ 251], 99.95th=[ 253], 00:24:30.534 | 99.99th=[ 253] 00:24:30.534 bw ( KiB/s): min=87040, max=247296, per=9.18%, avg=164249.60, stdev=48445.33, samples=20 00:24:30.534 iops : min= 340, max= 966, avg=641.60, stdev=189.24, samples=20 00:24:30.534 lat (msec) : 4=0.02%, 10=0.43%, 20=1.39%, 50=8.29%, 100=39.40% 00:24:30.534 lat (msec) : 250=50.27%, 500=0.20% 00:24:30.534 cpu : usr=1.69%, sys=2.08%, ctx=2031, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,6479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job6: (groupid=0, jobs=1): err= 0: pid=1671576: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=599, BW=150MiB/s (157MB/s)(1518MiB/10124msec); 0 zone resets 00:24:30.534 slat (usec): min=21, max=47470, avg=1299.32, stdev=3106.29 00:24:30.534 clat (usec): min=1241, max=285411, avg=105382.56, stdev=48470.95 00:24:30.534 lat (usec): min=1298, max=285459, avg=106681.89, stdev=49058.50 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 35], 20.00th=[ 68], 00:24:30.534 | 30.00th=[ 79], 40.00th=[ 100], 50.00th=[ 106], 60.00th=[ 118], 00:24:30.534 | 70.00th=[ 131], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 194], 00:24:30.534 | 99.00th=[ 213], 99.50th=[ 226], 99.90th=[ 264], 99.95th=[ 275], 00:24:30.534 | 99.99th=[ 288] 00:24:30.534 bw ( KiB/s): min=94208, max=233984, per=8.60%, avg=153804.80, stdev=42742.33, samples=20 00:24:30.534 iops : min= 368, max= 914, avg=600.80, stdev=166.96, samples=20 00:24:30.534 lat (msec) : 2=0.08%, 4=0.28%, 10=2.14%, 20=3.44%, 50=7.25% 00:24:30.534 lat (msec) : 100=27.99%, 250=58.59%, 500=0.23% 00:24:30.534 cpu : usr=1.53%, sys=1.89%, ctx=2942, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,6071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job7: (groupid=0, jobs=1): err= 0: pid=1671577: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=918, BW=230MiB/s (241MB/s)(2317MiB/10085msec); 0 zone resets 00:24:30.534 slat (usec): min=22, max=46111, avg=900.99, stdev=2239.64 00:24:30.534 clat (usec): min=1238, max=239633, avg=68709.99, stdev=45317.98 00:24:30.534 lat (usec): min=1274, max=239770, avg=69610.98, stdev=45851.15 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 37], 20.00th=[ 40], 00:24:30.534 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 49], 60.00th=[ 68], 00:24:30.534 | 70.00th=[ 73], 80.00th=[ 99], 90.00th=[ 146], 95.00th=[ 171], 00:24:30.534 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 236], 99.95th=[ 239], 00:24:30.534 | 99.99th=[ 241] 00:24:30.534 bw ( KiB/s): min=92160, max=397312, 
per=13.17%, avg=235596.80, stdev=101291.62, samples=20 00:24:30.534 iops : min= 360, max= 1552, avg=920.30, stdev=395.67, samples=20 00:24:30.534 lat (msec) : 2=0.11%, 4=0.55%, 10=1.95%, 20=2.54%, 50=45.56% 00:24:30.534 lat (msec) : 100=29.79%, 250=19.50% 00:24:30.534 cpu : usr=2.05%, sys=2.57%, ctx=3762, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,9266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.534 job8: (groupid=0, jobs=1): err= 0: pid=1671584: Thu Jul 11 13:54:32 2024 00:24:30.534 write: IOPS=581, BW=145MiB/s (152MB/s)(1468MiB/10100msec); 0 zone resets 00:24:30.534 slat (usec): min=26, max=71419, avg=1260.76, stdev=3371.96 00:24:30.534 clat (usec): min=1487, max=288676, avg=108766.79, stdev=52046.57 00:24:30.534 lat (usec): min=1560, max=288814, avg=110027.56, stdev=52682.96 00:24:30.534 clat percentiles (msec): 00:24:30.534 | 1.00th=[ 7], 5.00th=[ 28], 10.00th=[ 41], 20.00th=[ 71], 00:24:30.534 | 30.00th=[ 81], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 111], 00:24:30.534 | 70.00th=[ 127], 80.00th=[ 155], 90.00th=[ 180], 95.00th=[ 207], 00:24:30.534 | 99.00th=[ 241], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 275], 00:24:30.534 | 99.99th=[ 288] 00:24:30.534 bw ( KiB/s): min=84480, max=232960, per=8.31%, avg=148752.05, stdev=43101.51, samples=20 00:24:30.534 iops : min= 330, max= 910, avg=581.05, stdev=168.36, samples=20 00:24:30.534 lat (msec) : 2=0.03%, 4=0.17%, 10=1.69%, 20=1.45%, 50=9.16% 00:24:30.534 lat (msec) : 100=31.33%, 250=55.59%, 500=0.58% 00:24:30.534 cpu : usr=1.29%, sys=1.80%, ctx=3082, majf=0, minf=1 00:24:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.534 issued rwts: total=0,5873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.534 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.535 job9: (groupid=0, jobs=1): err= 0: pid=1671585: Thu Jul 11 13:54:32 2024 00:24:30.535 write: IOPS=668, BW=167MiB/s (175MB/s)(1678MiB/10035msec); 0 zone resets 00:24:30.535 slat (usec): min=20, max=49316, avg=1454.29, stdev=3052.23 00:24:30.535 clat (msec): min=2, max=239, avg=94.22, stdev=48.21 00:24:30.535 lat (msec): min=4, max=239, avg=95.67, stdev=48.86 00:24:30.535 clat percentiles (msec): 00:24:30.535 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:24:30.535 | 30.00th=[ 59], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 106], 00:24:30.535 | 70.00th=[ 120], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 182], 00:24:30.535 | 99.00th=[ 211], 99.50th=[ 226], 99.90th=[ 239], 99.95th=[ 241], 00:24:30.535 | 99.99th=[ 241] 00:24:30.535 bw ( KiB/s): min=77824, max=384000, per=9.51%, avg=170203.35, stdev=90752.79, samples=20 00:24:30.535 iops : min= 304, max= 1500, avg=664.85, stdev=354.51, samples=20 00:24:30.535 lat (msec) : 4=0.01%, 10=0.51%, 20=0.30%, 50=26.55%, 100=29.00% 00:24:30.535 lat (msec) : 250=43.63% 00:24:30.535 cpu : usr=1.71%, sys=2.12%, ctx=1883, majf=0, minf=1 00:24:30.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:30.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.535 issued rwts: total=0,6711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.535 job10: (groupid=0, jobs=1): err= 0: pid=1671586: Thu Jul 11 13:54:32 2024 00:24:30.535 write: IOPS=737, BW=184MiB/s (193MB/s)(1859MiB/10083msec); 0 zone resets 00:24:30.535 slat (usec): min=22, max=77589, avg=1234.38, stdev=2969.00 00:24:30.535 clat (usec): min=1457, max=262978, avg=85505.11, stdev=52000.44 00:24:30.535 lat (usec): min=1528, max=263050, avg=86739.48, stdev=52715.13 00:24:30.535 clat percentiles (msec): 00:24:30.535 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 42], 00:24:30.535 | 30.00th=[ 44], 40.00th=[ 57], 50.00th=[ 70], 60.00th=[ 79], 00:24:30.535 | 70.00th=[ 104], 80.00th=[ 129], 90.00th=[ 169], 95.00th=[ 199], 00:24:30.535 | 99.00th=[ 228], 99.50th=[ 257], 99.90th=[ 264], 99.95th=[ 264], 00:24:30.535 | 99.99th=[ 264] 00:24:30.535 bw ( KiB/s): min=73728, max=375808, per=10.55%, avg=188774.40, stdev=93117.21, samples=20 00:24:30.535 iops : min= 288, max= 1468, avg=737.40, stdev=363.74, samples=20 00:24:30.535 lat (msec) : 2=0.04%, 4=0.22%, 10=0.48%, 20=0.69%, 50=34.49% 00:24:30.535 lat (msec) : 100=30.13%, 250=33.39%, 500=0.56% 00:24:30.535 cpu : usr=2.40%, sys=2.17%, ctx=2452, majf=0, minf=1 00:24:30.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:30.535 issued rwts: total=0,7437,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:30.535 00:24:30.535 Run status group 0 (all jobs): 00:24:30.535 WRITE: bw=1747MiB/s (1832MB/s), 119MiB/s-230MiB/s (124MB/s-241MB/s), io=17.3GiB (18.6GB), run=10035-10126msec 00:24:30.535 00:24:30.535 Disk stats (read/write): 00:24:30.535 nvme0n1: ios=47/10663, merge=0/0, ticks=1323/1205508, in_queue=1206831, util=99.32% 00:24:30.535 nvme10n1: ios=49/12907, merge=0/0, ticks=61/1235264, in_queue=1235325, util=95.06% 00:24:30.535 nvme1n1: ios=48/11227, merge=0/0, ticks=3588/1230712, in_queue=1234300, util=99.94% 00:24:30.535 nvme2n1: ios=0/9316, merge=0/0, ticks=0/1198476, in_queue=1198476, util=95.57% 00:24:30.535 nvme3n1: ios=54/13048, merge=0/0, ticks=3329/1189289, in_queue=1192618, util=100.00% 00:24:30.535 nvme4n1: ios=0/12704, merge=0/0, ticks=0/1198836, in_queue=1198836, util=96.59% 00:24:30.535 nvme5n1: ios=0/12116, merge=0/0, ticks=0/1238301, in_queue=1238301, util=97.05% 00:24:30.535 nvme6n1: ios=41/18053, merge=0/0, ticks=602/1204692, in_queue=1205294, util=100.00% 00:24:30.535 nvme7n1: ios=0/11418, merge=0/0, ticks=0/1207400, in_queue=1207400, util=98.26% 00:24:30.535 nvme8n1: ios=0/13008, merge=0/0, ticks=0/1200737, in_queue=1200737, util=98.72% 00:24:30.535 nvme9n1: ios=0/14494, merge=0/0, ticks=0/1203895, in_queue=1203895, util=99.08% 00:24:30.535 13:54:32 -- target/multiconnection.sh@36 -- # sync 00:24:30.535 13:54:32 -- target/multiconnection.sh@37 -- # seq 1 11 00:24:30.535 13:54:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.535 13:54:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:30.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:30.535 13:54:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK1 00:24:30.535 13:54:32 -- common/autotest_common.sh@1198 -- # local i=0 00:24:30.535 13:54:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:30.535 13:54:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:24:30.535 13:54:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:24:30.535 13:54:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:30.535 13:54:32 -- common/autotest_common.sh@1210 -- # return 0 00:24:30.535 13:54:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.535 13:54:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.535 13:54:32 -- common/autotest_common.sh@10 -- # set +x 00:24:30.535 13:54:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.535 13:54:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.535 13:54:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:30.535 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:30.535 13:54:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:30.535 13:54:32 -- common/autotest_common.sh@1198 -- # local i=0 00:24:30.535 13:54:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:30.535 13:54:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:24:30.535 13:54:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:30.535 13:54:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:24:30.535 13:54:32 -- common/autotest_common.sh@1210 -- # return 0 00:24:30.794 13:54:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:30.794 13:54:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.794 13:54:32 -- common/autotest_common.sh@10 -- # set +x 00:24:30.794 13:54:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.794 13:54:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.794 13:54:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:31.054 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:31.054 13:54:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:31.054 13:54:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.054 13:54:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:24:31.054 13:54:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:31.054 13:54:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:31.054 13:54:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:24:31.054 13:54:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:31.054 13:54:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:31.054 13:54:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.054 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:24:31.054 13:54:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.054 13:54:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.054 13:54:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:31.314 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:31.314 13:54:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:31.314 13:54:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.314 13:54:33 
-- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:31.314 13:54:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:24:31.314 13:54:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:31.314 13:54:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:24:31.314 13:54:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:31.314 13:54:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:31.314 13:54:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.314 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:24:31.314 13:54:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.314 13:54:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.314 13:54:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:31.573 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:31.573 13:54:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:31.573 13:54:33 -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.573 13:54:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:31.573 13:54:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:24:31.573 13:54:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:31.573 13:54:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:24:31.573 13:54:33 -- common/autotest_common.sh@1210 -- # return 0 00:24:31.573 13:54:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:31.573 13:54:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.573 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:24:31.573 13:54:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.573 13:54:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.573 13:54:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:31.832 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:31.832 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:31.832 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.832 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:31.832 13:54:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:24:31.832 13:54:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:24:31.832 13:54:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:31.832 13:54:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:31.832 13:54:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:31.832 13:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.832 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:31.832 13:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.832 13:54:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.832 13:54:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:32.091 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:32.091 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:32.092 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:32.092 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:32.092 13:54:34 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:24:32.092 13:54:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:32.092 13:54:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:24:32.092 13:54:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:32.092 13:54:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:32.092 13:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.092 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 13:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.092 13:54:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.092 13:54:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:32.092 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:32.092 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:32.092 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:32.092 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:32.092 13:54:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:24:32.092 13:54:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:32.092 13:54:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:24:32.350 13:54:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:32.350 13:54:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:32.350 13:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.350 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:32.350 13:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.350 13:54:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.350 13:54:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:32.350 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:32.350 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:32.350 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:32.350 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:32.350 13:54:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:24:32.350 13:54:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:32.350 13:54:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:24:32.350 13:54:34 -- common/autotest_common.sh@1210 -- # return 0 00:24:32.350 13:54:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:32.350 13:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.350 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:24:32.350 13:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.350 13:54:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.350 13:54:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:32.609 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:32.609 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:32.609 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0 00:24:32.609 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:32.609 13:54:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:24:32.609 13:54:34 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:24:32.609 13:54:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10
00:24:32.609 13:54:34 -- common/autotest_common.sh@1210 -- # return 0
00:24:32.609 13:54:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:24:32.609 13:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:32.609 13:54:34 -- common/autotest_common.sh@10 -- # set +x
00:24:32.609 13:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:32.609 13:54:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:32.609 13:54:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:24:32.609 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:24:32.609 13:54:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:24:32.609 13:54:34 -- common/autotest_common.sh@1198 -- # local i=0
00:24:32.609 13:54:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:24:32.609 13:54:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11
00:24:32.609 13:54:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:24:32.609 13:54:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11
00:24:32.609 13:54:35 -- common/autotest_common.sh@1210 -- # return 0
00:24:32.609 13:54:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:24:32.609 13:54:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:32.609 13:54:35 -- common/autotest_common.sh@10 -- # set +x
00:24:32.609 13:54:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:32.609 13:54:35 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:24:32.609 13:54:35 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:24:32.609 13:54:35 -- target/multiconnection.sh@47 -- # nvmftestfini
00:24:32.609 13:54:35 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:32.609 13:54:35 -- nvmf/common.sh@116 -- # sync
00:24:32.609 13:54:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:32.609 13:54:35 -- nvmf/common.sh@119 -- # set +e
00:24:32.609 13:54:35 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:32.609 13:54:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:32.609 rmmod nvme_tcp
00:24:32.609 rmmod nvme_fabrics
00:24:32.868 rmmod nvme_keyring
00:24:32.868 13:54:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:32.868 13:54:35 -- nvmf/common.sh@123 -- # set -e
00:24:32.868 13:54:35 -- nvmf/common.sh@124 -- # return 0
00:24:32.868 13:54:35 -- nvmf/common.sh@477 -- # '[' -n 1662690 ']'
00:24:32.868 13:54:35 -- nvmf/common.sh@478 -- # killprocess 1662690
00:24:32.868 13:54:35 -- common/autotest_common.sh@926 -- # '[' -z 1662690 ']'
00:24:32.868 13:54:35 -- common/autotest_common.sh@930 -- # kill -0 1662690
00:24:32.869 13:54:35 -- common/autotest_common.sh@931 -- # uname
00:24:32.869 13:54:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:32.869 13:54:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1662690
00:24:32.869 13:54:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:32.869 13:54:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:32.869 13:54:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1662690'
00:24:32.869 killing process with pid 1662690
00:24:32.869 13:54:35 -- common/autotest_common.sh@945 -- # kill 1662690
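[Editor's note: stripped of the xtrace plumbing, the per-subsystem teardown traced above is two operations per connection, first on the initiator and then on the target. A minimal sketch of the same flow, assuming SPDK's rpc.py talking to the default /var/tmp/spdk.sock socket (which is what the script's rpc_cmd helper wraps):

for i in $(seq 1 11); do
  # Initiator side: drop the kernel NVMe-oF controller for this subsystem.
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
  # Target side: remove the subsystem from the running nvmf_tgt via JSON-RPC.
  ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done
]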
00:24:32.869 13:54:35 -- common/autotest_common.sh@950 -- # wait 1662690
00:24:33.128 13:54:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:33.128 13:54:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:33.128 13:54:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:33.128 13:54:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:33.128 13:54:35 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:33.128 13:54:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:33.128 13:54:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:33.128 13:54:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:35.664 13:54:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:24:35.664
00:24:35.664 real 1m10.662s
00:24:35.664 user 4m11.081s
00:24:35.664 sys 0m25.364s
00:24:35.664 13:54:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:35.664 13:54:37 -- common/autotest_common.sh@10 -- # set +x
00:24:35.664 ************************************
00:24:35.664 END TEST nvmf_multiconnection
00:24:35.664 ************************************
00:24:35.664 13:54:37 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:24:35.664 13:54:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:24:35.664 13:54:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:35.664 13:54:37 -- common/autotest_common.sh@10 -- # set +x
00:24:35.664 ************************************
00:24:35.664 START TEST nvmf_initiator_timeout
00:24:35.664 ************************************
00:24:35.664 13:54:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp
00:24:35.664 * Looking for test storage...
00:24:35.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.664 13:54:37 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.664 13:54:37 -- nvmf/common.sh@7 -- # uname -s 00:24:35.664 13:54:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.664 13:54:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.664 13:54:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.664 13:54:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.664 13:54:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.664 13:54:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.664 13:54:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.664 13:54:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.664 13:54:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.664 13:54:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.664 13:54:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.664 13:54:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.664 13:54:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.664 13:54:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.664 13:54:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.664 13:54:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.664 13:54:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.664 13:54:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.664 13:54:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.664 13:54:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.664 13:54:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.664 13:54:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.664 13:54:37 -- paths/export.sh@5 -- # export PATH 00:24:35.664 13:54:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.664 13:54:37 -- nvmf/common.sh@46 -- # : 0 00:24:35.664 13:54:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.664 13:54:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.664 13:54:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.664 13:54:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.664 13:54:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.664 13:54:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:35.664 13:54:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.664 13:54:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.664 13:54:37 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.664 13:54:37 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.664 13:54:37 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:35.664 13:54:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:35.664 13:54:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.664 13:54:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.664 13:54:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.664 13:54:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.664 13:54:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.664 13:54:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.664 13:54:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.664 13:54:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:35.664 13:54:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:35.664 13:54:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:35.664 13:54:37 -- common/autotest_common.sh@10 -- # set +x 00:24:40.939 13:54:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.939 13:54:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.939 13:54:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.939 13:54:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.939 13:54:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.939 13:54:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.939 13:54:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.939 13:54:42 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.939 13:54:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.939 
13:54:42 -- nvmf/common.sh@295 -- # e810=() 00:24:40.939 13:54:42 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.939 13:54:42 -- nvmf/common.sh@296 -- # x722=() 00:24:40.939 13:54:42 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.939 13:54:42 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.939 13:54:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.939 13:54:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.939 13:54:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.939 13:54:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:40.939 13:54:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:40.939 13:54:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:40.939 13:54:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:40.939 13:54:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.939 13:54:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.939 13:54:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:40.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:40.940 13:54:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.940 13:54:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:40.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:40.940 13:54:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.940 13:54:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.940 13:54:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.940 13:54:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.940 13:54:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.940 13:54:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:24:40.940 Found net devices under 0000:86:00.0: cvl_0_0 00:24:40.940 13:54:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.940 13:54:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.940 13:54:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.940 13:54:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.940 13:54:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.940 13:54:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:40.940 Found net devices under 0000:86:00.1: cvl_0_1 00:24:40.940 13:54:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.940 13:54:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.940 13:54:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.940 13:54:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:40.940 13:54:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.940 13:54:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.940 13:54:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.940 13:54:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:40.940 13:54:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.940 13:54:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.940 13:54:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:40.940 13:54:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.940 13:54:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.940 13:54:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:40.940 13:54:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:40.940 13:54:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.940 13:54:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.940 13:54:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.940 13:54:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.940 13:54:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:40.940 13:54:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.940 13:54:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.940 13:54:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.940 13:54:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:40.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:24:40.940 00:24:40.940 --- 10.0.0.2 ping statistics --- 00:24:40.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.940 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:24:40.940 13:54:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:24:40.940 00:24:40.940 --- 10.0.0.1 ping statistics --- 00:24:40.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.940 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:24:40.940 13:54:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.940 13:54:42 -- nvmf/common.sh@410 -- # return 0 00:24:40.940 13:54:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.940 13:54:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.940 13:54:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:40.940 13:54:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.940 13:54:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:40.940 13:54:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:40.940 13:54:42 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:40.940 13:54:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:40.940 13:54:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:40.940 13:54:42 -- common/autotest_common.sh@10 -- # set +x 00:24:40.940 13:54:42 -- nvmf/common.sh@469 -- # nvmfpid=1676881 00:24:40.940 13:54:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:40.940 13:54:42 -- nvmf/common.sh@470 -- # waitforlisten 1676881 00:24:40.940 13:54:42 -- common/autotest_common.sh@819 -- # '[' -z 1676881 ']' 00:24:40.940 13:54:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.940 13:54:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.940 13:54:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.940 13:54:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.940 13:54:42 -- common/autotest_common.sh@10 -- # set +x 00:24:40.940 [2024-07-11 13:54:42.982347] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:40.940 [2024-07-11 13:54:42.982389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.940 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.940 [2024-07-11 13:54:43.040108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.940 [2024-07-11 13:54:43.080623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.940 [2024-07-11 13:54:43.080735] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.940 [2024-07-11 13:54:43.080744] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.940 [2024-07-11 13:54:43.080751] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
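A condensed, standalone sketch of the namespace testbed that nvmftestinit assembles above (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this trace; another rig's PCI scan would surface different net_devs):

# target port moves into a private namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port, then verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1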
00:24:40.940 [2024-07-11 13:54:43.080791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.940 [2024-07-11 13:54:43.080901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.940 [2024-07-11 13:54:43.080915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.940 [2024-07-11 13:54:43.080917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.508 13:54:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:41.508 13:54:43 -- common/autotest_common.sh@852 -- # return 0 00:24:41.508 13:54:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:41.509 13:54:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 13:54:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 Malloc0 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 Delay0 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 [2024-07-11 13:54:43.851881] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.509 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.509 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:24:41.509 [2024-07-11 13:54:43.876870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.509 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.509 13:54:43 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:42.887 13:54:45 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:42.887 13:54:45 -- common/autotest_common.sh@1177 -- # local i=0 00:24:42.887 13:54:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.887 13:54:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:42.887 13:54:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:44.794 13:54:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:44.794 13:54:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:44.794 13:54:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:44.794 13:54:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:44.794 13:54:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.794 13:54:47 -- common/autotest_common.sh@1187 -- # return 0 00:24:44.794 13:54:47 -- target/initiator_timeout.sh@35 -- # fio_pid=1677566 00:24:44.794 13:54:47 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:44.794 13:54:47 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:44.794 [global] 00:24:44.794 thread=1 00:24:44.794 invalidate=1 00:24:44.794 rw=write 00:24:44.794 time_based=1 00:24:44.794 runtime=60 00:24:44.794 ioengine=libaio 00:24:44.794 direct=1 00:24:44.794 bs=4096 00:24:44.794 iodepth=1 00:24:44.794 norandommap=0 00:24:44.794 numjobs=1 00:24:44.794 00:24:44.794 verify_dump=1 00:24:44.794 verify_backlog=512 00:24:44.794 verify_state_save=0 00:24:44.794 do_verify=1 00:24:44.794 verify=crc32c-intel 00:24:44.794 [job0] 00:24:44.794 filename=/dev/nvme0n1 00:24:44.794 Could not set queue depth (nvme0n1) 00:24:45.053 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:45.053 fio-3.35 00:24:45.053 Starting 1 thread 00:24:48.403 13:54:50 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:48.403 13:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.403 13:54:50 -- common/autotest_common.sh@10 -- # set +x 00:24:48.403 true 00:24:48.403 13:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.403 13:54:50 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:48.403 13:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.403 13:54:50 -- common/autotest_common.sh@10 -- # set +x 00:24:48.403 true 00:24:48.403 13:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.403 13:54:50 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:48.403 13:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.404 13:54:50 -- common/autotest_common.sh@10 -- # set +x 00:24:48.404 true 00:24:48.404 13:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.404 13:54:50 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:24:48.404 13:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.404 13:54:50 -- common/autotest_common.sh@10 -- # set +x 00:24:48.404 true 00:24:48.404 13:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
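Spelled out, the bring-up traced above reduces to the following sketch. rpc_cmd is the harness's wrapper; assuming it resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, an equivalent manual run would be:

# 64 MiB malloc bdev wrapped in a delay bdev, so the test can inflate latencies later
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# TCP transport, subsystem, namespace, listener on the namespaced target address
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# kernel initiator connects from the root namespace ($NVME_HOSTNQN/$NVME_HOSTID come from nvmf/common.sh)
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# the job file fio-wrapper prints above is equivalent to this command line
fio --name=job0 --filename=/dev/nvme0n1 --thread --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --invalidate=1 --time_based --runtime=60 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0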
00:24:48.404 13:54:50 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:50.936 13:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.936 13:54:53 -- common/autotest_common.sh@10 -- # set +x 00:24:50.936 true 00:24:50.936 13:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:50.936 13:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.936 13:54:53 -- common/autotest_common.sh@10 -- # set +x 00:24:50.936 true 00:24:50.936 13:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:50.936 13:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.936 13:54:53 -- common/autotest_common.sh@10 -- # set +x 00:24:50.936 true 00:24:50.936 13:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:50.936 13:54:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.936 13:54:53 -- common/autotest_common.sh@10 -- # set +x 00:24:50.936 true 00:24:50.936 13:54:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:50.936 13:54:53 -- target/initiator_timeout.sh@54 -- # wait 1677566 00:25:47.164 00:25:47.164 job0: (groupid=0, jobs=1): err= 0: pid=1677809: Thu Jul 11 13:55:47 2024 00:25:47.164 read: IOPS=341, BW=1365KiB/s (1398kB/s)(80.0MiB/60000msec) 00:25:47.164 slat (usec): min=6, max=11539, avg= 8.46, stdev=101.52 00:25:47.164 clat (usec): min=262, max=41552k, avg=2667.35, stdev=290368.67 00:25:47.164 lat (usec): min=269, max=41552k, avg=2675.81, stdev=290368.77 00:25:47.164 clat percentiles (usec): 00:25:47.164 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 314], 00:25:47.164 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:25:47.164 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 375], 95.00th=[ 396], 00:25:47.164 | 99.00th=[ 515], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:47.164 | 99.99th=[42206] 00:25:47.164 write: IOPS=342, BW=1370KiB/s (1403kB/s)(80.3MiB/60000msec); 0 zone resets 00:25:47.164 slat (usec): min=8, max=27416, avg=12.13, stdev=191.24 00:25:47.164 clat (usec): min=169, max=1697, avg=235.97, stdev=34.70 00:25:47.164 lat (usec): min=179, max=27731, avg=248.10, stdev=194.95 00:25:47.164 clat percentiles (usec): 00:25:47.164 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:25:47.164 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:25:47.164 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 330], 00:25:47.164 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 392], 00:25:47.164 | 99.99th=[ 889] 00:25:47.164 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6301.54, stdev=1383.08, samples=26 00:25:47.164 iops : min= 1024, max= 2048, avg=1575.38, stdev=345.77, samples=26 00:25:47.164 lat (usec) : 250=42.92%, 500=56.14%, 750=0.53%, 1000=0.01% 00:25:47.164 lat (msec) : 2=0.02%, 4=0.01%, 50=0.37%, >=2000=0.01% 00:25:47.164 cpu : usr=0.40%, sys=0.72%, ctx=41036, majf=0, minf=2 00:25:47.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.164 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.164 issued rwts: total=20480,20548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:47.164 00:25:47.164 Run status group 0 (all jobs): 00:25:47.164 READ: bw=1365KiB/s (1398kB/s), 1365KiB/s-1365KiB/s (1398kB/s-1398kB/s), io=80.0MiB (83.9MB), run=60000-60000msec 00:25:47.164 WRITE: bw=1370KiB/s (1403kB/s), 1370KiB/s-1370KiB/s (1403kB/s-1403kB/s), io=80.3MiB (84.2MB), run=60000-60000msec 00:25:47.164 00:25:47.164 Disk stats (read/write): 00:25:47.164 nvme0n1: ios=20533/20480, merge=0/0, ticks=14158/4686, in_queue=18844, util=100.00% 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:47.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:47.164 13:55:47 -- common/autotest_common.sh@1198 -- # local i=0 00:25:47.164 13:55:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:47.164 13:55:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:47.164 13:55:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:47.164 13:55:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:47.164 13:55:47 -- common/autotest_common.sh@1210 -- # return 0 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:47.164 nvmf hotplug test: fio successful as expected 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.164 13:55:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.164 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:25:47.164 13:55:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:47.164 13:55:47 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:47.164 13:55:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:47.164 13:55:47 -- nvmf/common.sh@116 -- # sync 00:25:47.164 13:55:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:47.164 13:55:47 -- nvmf/common.sh@119 -- # set +e 00:25:47.164 13:55:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:47.164 13:55:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:47.164 rmmod nvme_tcp 00:25:47.164 rmmod nvme_fabrics 00:25:47.164 rmmod nvme_keyring 00:25:47.164 13:55:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:47.164 13:55:47 -- nvmf/common.sh@123 -- # set -e 00:25:47.164 13:55:47 -- nvmf/common.sh@124 -- # return 0 00:25:47.164 13:55:47 -- nvmf/common.sh@477 -- # '[' -n 1676881 ']' 00:25:47.164 13:55:47 -- nvmf/common.sh@478 -- # killprocess 1676881 00:25:47.164 13:55:47 -- common/autotest_common.sh@926 -- # '[' -z 1676881 ']' 00:25:47.164 13:55:47 -- common/autotest_common.sh@930 -- # kill -0 1676881 00:25:47.164 13:55:47 -- common/autotest_common.sh@931 -- # uname 00:25:47.164 13:55:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:47.164 13:55:47 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 1676881 00:25:47.164 13:55:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:47.164 13:55:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:47.164 13:55:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1676881' 00:25:47.164 killing process with pid 1676881 00:25:47.164 13:55:47 -- common/autotest_common.sh@945 -- # kill 1676881 00:25:47.164 13:55:47 -- common/autotest_common.sh@950 -- # wait 1676881 00:25:47.164 13:55:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:47.164 13:55:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:47.164 13:55:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:47.164 13:55:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.164 13:55:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:47.164 13:55:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.164 13:55:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.164 13:55:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.732 13:55:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:47.732 00:25:47.732 real 1m12.386s 00:25:47.732 user 4m24.930s 00:25:47.732 sys 0m6.351s 00:25:47.732 13:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.732 13:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:47.732 ************************************ 00:25:47.732 END TEST nvmf_initiator_timeout 00:25:47.732 ************************************ 00:25:47.732 13:55:50 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:25:47.732 13:55:50 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:25:47.732 13:55:50 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:25:47.732 13:55:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:47.732 13:55:50 -- common/autotest_common.sh@10 -- # set +x 00:25:53.014 13:55:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:53.014 13:55:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:53.014 13:55:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:53.014 13:55:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:53.014 13:55:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:53.014 13:55:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:53.014 13:55:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:53.014 13:55:54 -- nvmf/common.sh@294 -- # net_devs=() 00:25:53.014 13:55:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:53.014 13:55:54 -- nvmf/common.sh@295 -- # e810=() 00:25:53.014 13:55:54 -- nvmf/common.sh@295 -- # local -ga e810 00:25:53.014 13:55:54 -- nvmf/common.sh@296 -- # x722=() 00:25:53.014 13:55:54 -- nvmf/common.sh@296 -- # local -ga x722 00:25:53.014 13:55:54 -- nvmf/common.sh@297 -- # mlx=() 00:25:53.014 13:55:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:53.014 13:55:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@313 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.014 13:55:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:53.014 13:55:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:53.014 13:55:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:53.014 13:55:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:53.014 13:55:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.014 13:55:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:53.014 13:55:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.014 13:55:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:53.014 13:55:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:53.014 13:55:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:53.014 13:55:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.014 13:55:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:53.014 13:55:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.014 13:55:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:53.014 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.014 13:55:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.014 13:55:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:53.014 13:55:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.014 13:55:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:53.014 13:55:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.014 13:55:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.014 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.014 13:55:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.014 13:55:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:53.014 13:55:54 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.014 13:55:54 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:53.014 13:55:54 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:53.014 13:55:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:53.014 13:55:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:53.014 13:55:54 -- common/autotest_common.sh@10 -- # set +x 00:25:53.014 ************************************ 00:25:53.014 START TEST nvmf_perf_adq 00:25:53.014 ************************************ 00:25:53.014 13:55:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:53.014 * Looking for test storage... 00:25:53.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.014 13:55:55 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.014 13:55:55 -- nvmf/common.sh@7 -- # uname -s 00:25:53.014 13:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.014 13:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.014 13:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.014 13:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.014 13:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.014 13:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.014 13:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.014 13:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.014 13:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.014 13:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.014 13:55:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:53.014 13:55:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:53.014 13:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.014 13:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.014 13:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.014 13:55:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.014 13:55:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.014 13:55:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.014 13:55:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.014 13:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.014 13:55:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.014 13:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.014 13:55:55 -- paths/export.sh@5 -- # export PATH 00:25:53.014 13:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.014 13:55:55 -- nvmf/common.sh@46 -- # : 0 00:25:53.014 13:55:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:53.014 13:55:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:53.014 13:55:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:53.014 13:55:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.014 13:55:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.014 13:55:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:53.014 13:55:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:53.014 13:55:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:53.014 13:55:55 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:53.014 13:55:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:53.014 13:55:55 -- common/autotest_common.sh@10 -- # set +x 00:25:58.285 13:56:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:58.285 13:56:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:58.285 13:56:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:58.285 13:56:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:58.285 13:56:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:58.285 13:56:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:58.285 13:56:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:58.285 13:56:00 -- nvmf/common.sh@294 -- # net_devs=() 00:25:58.285 13:56:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:58.285 13:56:00 -- nvmf/common.sh@295 -- # e810=() 00:25:58.285 13:56:00 -- nvmf/common.sh@295 -- # local -ga e810 00:25:58.285 13:56:00 -- nvmf/common.sh@296 -- # x722=() 00:25:58.285 13:56:00 -- nvmf/common.sh@296 -- # local -ga x722 00:25:58.285 13:56:00 -- nvmf/common.sh@297 -- # mlx=() 00:25:58.285 13:56:00 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:25:58.285 13:56:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.285 13:56:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:58.285 13:56:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:58.285 13:56:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:58.285 13:56:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:58.285 13:56:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.285 13:56:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:58.285 13:56:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.285 13:56:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:58.285 13:56:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:58.285 13:56:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:58.285 13:56:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.285 13:56:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:58.285 13:56:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.285 13:56:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.285 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.285 13:56:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.285 13:56:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:58.285 13:56:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:58.285 13:56:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:58.285 13:56:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.285 13:56:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.285 Found net devices under 0000:86:00.1: cvl_0_1 00:25:58.285 13:56:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.285 13:56:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:58.285 13:56:00 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.285 13:56:00 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:58.285 13:56:00 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:58.285 13:56:00 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:58.285 13:56:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:59.221 13:56:01 -- target/perf_adq.sh@53 -- # modprobe ice 00:26:01.156 13:56:03 -- target/perf_adq.sh@54 -- # sleep 5 00:26:06.434 13:56:08 -- target/perf_adq.sh@67 -- # nvmftestinit 00:26:06.434 13:56:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:06.434 13:56:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.434 13:56:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:06.434 13:56:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:06.434 13:56:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:06.434 13:56:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.434 13:56:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.434 13:56:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.434 13:56:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:06.434 13:56:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:06.434 13:56:08 -- common/autotest_common.sh@10 -- # set +x 00:26:06.434 13:56:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:06.434 13:56:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:06.434 13:56:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:06.434 13:56:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:06.434 13:56:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:06.434 13:56:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:06.434 13:56:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:06.434 13:56:08 -- nvmf/common.sh@294 -- # net_devs=() 00:26:06.434 13:56:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:06.434 13:56:08 -- nvmf/common.sh@295 -- # e810=() 00:26:06.434 13:56:08 -- nvmf/common.sh@295 -- # local -ga e810 00:26:06.434 13:56:08 -- nvmf/common.sh@296 -- # x722=() 00:26:06.434 13:56:08 -- nvmf/common.sh@296 -- # local -ga x722 00:26:06.434 13:56:08 -- nvmf/common.sh@297 -- # mlx=() 00:26:06.434 13:56:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:06.434 13:56:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.434 13:56:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:06.434 13:56:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:06.434 13:56:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:06.434 13:56:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:06.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:06.434 13:56:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:06.434 13:56:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:06.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:06.434 13:56:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:06.434 13:56:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.434 13:56:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.434 13:56:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:06.434 Found net devices under 0000:86:00.0: cvl_0_0 00:26:06.434 13:56:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.434 13:56:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:06.434 13:56:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.434 13:56:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.434 13:56:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:06.434 Found net devices under 0000:86:00.1: cvl_0_1 00:26:06.434 13:56:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.434 13:56:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:06.434 13:56:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:06.434 13:56:08 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:06.434 13:56:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:06.434 13:56:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.434 13:56:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.434 13:56:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.434 13:56:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:06.434 13:56:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.434 13:56:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.434 13:56:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:06.434 13:56:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.434 13:56:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.434 13:56:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:06.434 13:56:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:06.434 13:56:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.434 13:56:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.434 13:56:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.434 13:56:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.434 13:56:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:06.434 13:56:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.435 13:56:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.435 13:56:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.435 13:56:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:06.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:06.435 00:26:06.435 --- 10.0.0.2 ping statistics --- 00:26:06.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.435 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:06.435 13:56:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:26:06.435 00:26:06.435 --- 10.0.0.1 ping statistics --- 00:26:06.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.435 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:26:06.435 13:56:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.435 13:56:08 -- nvmf/common.sh@410 -- # return 0 00:26:06.435 13:56:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:06.435 13:56:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.435 13:56:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:06.435 13:56:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:06.435 13:56:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.435 13:56:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:06.435 13:56:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:06.435 13:56:08 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:06.435 13:56:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:06.435 13:56:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:06.435 13:56:08 -- common/autotest_common.sh@10 -- # set +x 00:26:06.435 13:56:08 -- nvmf/common.sh@469 -- # nvmfpid=1695502 00:26:06.435 13:56:08 -- nvmf/common.sh@470 -- # waitforlisten 1695502 00:26:06.435 13:56:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:06.435 13:56:08 -- common/autotest_common.sh@819 -- # '[' -z 1695502 ']' 00:26:06.435 13:56:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.435 13:56:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:06.435 13:56:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.435 13:56:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:06.435 13:56:08 -- common/autotest_common.sh@10 -- # set +x 00:26:06.435 [2024-07-11 13:56:08.713289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:06.435 [2024-07-11 13:56:08.713334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.435 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.435 [2024-07-11 13:56:08.769861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:06.435 [2024-07-11 13:56:08.809695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:06.435 [2024-07-11 13:56:08.809803] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.435 [2024-07-11 13:56:08.809811] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.435 [2024-07-11 13:56:08.809818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
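The perf_adq target start that follows uses SPDK's --wait-for-rpc pattern: the app pauses after launch so socket options can be applied before framework init. Condensed from the records below, under the same rpc_cmd-to-scripts/rpc.py assumption (the 0 passed to adq_configure_nvmf_target becomes the placement-id here):

# launch paused inside the target namespace
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# configure the posix sock layer, then let framework init proceed
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
# transport tuned for this run, plus a malloc-backed subsystem
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420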
00:26:06.435 [2024-07-11 13:56:08.809862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:06.435 [2024-07-11 13:56:08.809957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:06.435 [2024-07-11 13:56:08.810040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:06.435 [2024-07-11 13:56:08.810041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:06.435 13:56:08 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:06.435 13:56:08 -- common/autotest_common.sh@852 -- # return 0
00:26:06.435 13:56:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:06.435 13:56:08 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:06.435 13:56:08 -- common/autotest_common.sh@10 -- # set +x
00:26:06.435 13:56:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:06.435 13:56:08 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0
00:26:06.435 13:56:08 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:26:06.435 13:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.435 13:56:08 -- common/autotest_common.sh@10 -- # set +x
00:26:06.435 13:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.435 13:56:08 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init
00:26:06.435 13:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.435 13:56:08 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 13:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:08 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:26:06.695 13:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.695 13:56:08 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 [2024-07-11 13:56:08.980279] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:06.695 13:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:08 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:06.695 13:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.695 13:56:08 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 Malloc1
00:26:06.695 13:56:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:09 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:06.695 13:56:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.695 13:56:09 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 13:56:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:09 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:06.695 13:56:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.695 13:56:09 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 13:56:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:09 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:06.695 13:56:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:06.695 13:56:09 -- common/autotest_common.sh@10 -- # set +x
00:26:06.695 [2024-07-11 13:56:09.027922] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:06.695 13:56:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:06.695 13:56:09 -- target/perf_adq.sh@73 -- # perfpid=1695645
00:26:06.695 13:56:09 -- target/perf_adq.sh@74 -- # sleep 2
00:26:06.695 13:56:09 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:26:06.695 EAL: No free 2048 kB hugepages reported on node 1
00:26:08.596 13:56:11 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats
00:26:08.596 13:56:11 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:26:08.596 13:56:11 -- target/perf_adq.sh@76 -- # wc -l
00:26:08.596 13:56:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:08.596 13:56:11 -- common/autotest_common.sh@10 -- # set +x
00:26:08.854 13:56:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:08.854 13:56:11 -- target/perf_adq.sh@76 -- # count=4
00:26:08.854 13:56:11 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]]
00:26:08.854 13:56:11 -- target/perf_adq.sh@81 -- # wait 1695645
00:26:16.965 Initializing NVMe Controllers
00:26:16.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:16.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:16.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:16.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:16.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:16.965 Initialization complete. Launching workers.
00:26:16.965 ========================================================
00:26:16.965 Latency(us)
00:26:16.965 Device Information : IOPS MiB/s Average min max
00:26:16.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11398.74 44.53 5615.09 940.29 9565.19
00:26:16.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11469.34 44.80 5580.97 1183.02 9672.69
00:26:16.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11195.04 43.73 5716.06 1028.18 9900.45
00:26:16.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11235.54 43.89 5696.74 980.23 10130.03
00:26:16.965 ========================================================
00:26:16.965 Total : 45298.66 176.95 5651.65 940.29 10130.03
00:26:16.965
00:26:16.965 13:56:19 -- target/perf_adq.sh@82 -- # nvmftestfini
00:26:16.965 13:56:19 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:16.965 13:56:19 -- nvmf/common.sh@116 -- # sync
00:26:16.965 13:56:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:16.965 13:56:19 -- nvmf/common.sh@119 -- # set +e
00:26:16.965 13:56:19 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:16.965 13:56:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:16.965 rmmod nvme_tcp
00:26:16.965 rmmod nvme_fabrics
00:26:16.965 rmmod nvme_keyring
00:26:16.965 13:56:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:16.965 13:56:19 -- nvmf/common.sh@123 -- # set -e
00:26:16.965 13:56:19 -- nvmf/common.sh@124 -- # return 0
00:26:16.965 13:56:19 -- nvmf/common.sh@477 -- # '[' -n 1695502 ']'
00:26:16.965 13:56:19 -- nvmf/common.sh@478 -- # killprocess 1695502
00:26:16.965 13:56:19 -- common/autotest_common.sh@926 -- # '[' -z 1695502 ']'
00:26:16.965 13:56:19 -- common/autotest_common.sh@930 -- # kill -0 1695502
00:26:16.965 13:56:19 -- common/autotest_common.sh@931 -- # uname
00:26:16.965 13:56:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:16.965 13:56:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1695502
00:26:16.965 13:56:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:26:16.965 13:56:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:26:16.965 13:56:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1695502'
00:26:16.965 killing process with pid 1695502
00:26:16.965 13:56:19 -- common/autotest_common.sh@945 -- # kill 1695502
00:26:16.965 13:56:19 -- common/autotest_common.sh@950 -- # wait 1695502
00:26:17.224 13:56:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:17.224 13:56:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:17.224 13:56:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:17.224 13:56:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:17.224 13:56:19 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:17.225 13:56:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:17.225 13:56:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:17.225 13:56:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:19.127 13:56:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:19.127 13:56:21 -- target/perf_adq.sh@84 -- # adq_reload_driver
00:26:19.127 13:56:21 -- target/perf_adq.sh@52 -- # rmmod ice
00:26:20.504 13:56:22 -- target/perf_adq.sh@53 -- # modprobe ice
00:26:22.410 13:56:24 -- target/perf_adq.sh@54 -- # sleep 5
00:26:27.683 13:56:29 -- target/perf_adq.sh@87 -- # nvmftestinit
00:26:27.683 13:56:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:26:27.683 13:56:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:27.683 13:56:29 -- nvmf/common.sh@436 -- # prepare_net_devs
00:26:27.683 13:56:29 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:26:27.683 13:56:29 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:26:27.683 13:56:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:27.683 13:56:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:27.683 13:56:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:27.683 13:56:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:26:27.683 13:56:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:26:27.683 13:56:29 -- nvmf/common.sh@284 -- # xtrace_disable
00:26:27.683 13:56:29 -- common/autotest_common.sh@10 -- # set +x
00:26:27.683 13:56:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:26:27.683 13:56:29 -- nvmf/common.sh@290 -- # pci_devs=()
00:26:27.683 13:56:29 -- nvmf/common.sh@290 -- # local -a pci_devs
00:26:27.683 13:56:29 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:26:27.683 13:56:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:26:27.683 13:56:29 -- nvmf/common.sh@292 -- # pci_drivers=()
00:26:27.683 13:56:29 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:26:27.683 13:56:29 -- nvmf/common.sh@294 -- # net_devs=()
00:26:27.683 13:56:29 -- nvmf/common.sh@294 -- # local -ga net_devs
00:26:27.683 13:56:29 -- nvmf/common.sh@295 -- # e810=()
00:26:27.683 13:56:29 -- nvmf/common.sh@295 -- # local -ga e810
00:26:27.683 13:56:29 -- nvmf/common.sh@296 -- # x722=()
00:26:27.683 13:56:29 -- nvmf/common.sh@296 -- # local -ga x722
00:26:27.683 13:56:29 -- nvmf/common.sh@297 -- # mlx=()
00:26:27.683 13:56:29 -- nvmf/common.sh@297 -- # local -ga mlx
00:26:27.683 13:56:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:27.683 13:56:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:27.684 13:56:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:27.684 13:56:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:26:27.684 13:56:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:27.684 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:27.684 13:56:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:26:27.684 13:56:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:27.684 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:27.684 13:56:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:26:27.684 13:56:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:27.684 13:56:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:27.684 13:56:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:27.684 Found net devices under 0000:86:00.0: cvl_0_0
00:26:27.684 13:56:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:26:27.684 13:56:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:27.684 13:56:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:27.684 13:56:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:27.684 Found net devices under 0000:86:00.1: cvl_0_1
00:26:27.684 13:56:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@402 -- # is_hw=yes
00:26:27.684 13:56:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:26:27.684 13:56:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:27.684 13:56:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:27.684 13:56:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:26:27.684 13:56:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:27.684 13:56:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:27.684 13:56:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:26:27.684 13:56:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:27.684 13:56:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:27.684 13:56:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:26:27.684 13:56:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:26:27.684 13:56:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:26:27.684 13:56:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:27.684 13:56:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:27.684 13:56:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:27.684 13:56:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:26:27.684 13:56:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:27.684 13:56:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:27.684 13:56:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:27.684 13:56:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:26:27.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:27.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms
00:26:27.684
00:26:27.684 --- 10.0.0.2 ping statistics ---
00:26:27.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:27.684 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:26:27.684 13:56:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:27.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:27.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms
00:26:27.684
00:26:27.684 --- 10.0.0.1 ping statistics ---
00:26:27.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:27.684 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms
00:26:27.684 13:56:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:27.684 13:56:29 -- nvmf/common.sh@410 -- # return 0
00:26:27.684 13:56:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:26:27.684 13:56:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:27.684 13:56:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:27.684 13:56:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:27.684 13:56:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:27.684 13:56:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:27.684 13:56:29 -- target/perf_adq.sh@88 -- # adq_configure_driver
00:26:27.684 13:56:29 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:26:27.684 13:56:29 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:26:27.684 13:56:29 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:26:27.684 net.core.busy_poll = 1
00:26:27.684 13:56:29 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:26:27.684 net.core.busy_read = 1
00:26:27.684 13:56:29 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:26:27.684 13:56:29 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:26:27.684 13:56:30 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:26:27.944 13:56:30 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:26:27.944 13:56:30 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:26:27.944 13:56:30 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc
00:26:27.944 13:56:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:27.944 13:56:30 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:27.944 13:56:30 -- common/autotest_common.sh@10 -- # set +x
00:26:27.944 13:56:30 -- nvmf/common.sh@469 -- # nvmfpid=1699371
00:26:27.944 13:56:30 -- nvmf/common.sh@470 -- # waitforlisten 1699371
00:26:27.944 13:56:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:26:27.944 13:56:30 -- common/autotest_common.sh@819 -- # '[' -z 1699371 ']'
00:26:27.944 13:56:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:27.944 13:56:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:27.944 13:56:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:27.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
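For reference, the adq_configure_driver sequence traced above reduces to the following host-side recipe, shown here as a minimal sketch without the test's ip netns exec cvl_0_0_ns_spdk prefix; the cvl_0_0 interface name, the two-queues-per-TC split, and the 10.0.0.2:4420 match are the values used on this rig and would differ elsewhere:

    # Enable hardware TC offload on the ice/E810 port; ADQ channel filters live in HW
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # Busy-poll sockets from the application instead of waiting on interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 on queues 0-1, TC1 on queues 2-3, in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer inbound NVMe/TCP (dst 10.0.0.2:4420) into TC1, hardware-only (skip_sw)
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The scripts/perf/nvmf/set_xps_rxqs helper invoked right after this configures transmit packet steering for the port's queues, completing the queue-affinity side of the setup.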
00:26:27.944 13:56:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:27.944 13:56:30 -- common/autotest_common.sh@10 -- # set +x
00:26:27.944 [2024-07-11 13:56:30.260230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:27.944 [2024-07-11 13:56:30.260283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:27.944 EAL: No free 2048 kB hugepages reported on node 1
00:26:27.944 [2024-07-11 13:56:30.319881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:27.944 [2024-07-11 13:56:30.360464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:27.944 [2024-07-11 13:56:30.360571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:27.944 [2024-07-11 13:56:30.360579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:27.944 [2024-07-11 13:56:30.360585] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:27.944 [2024-07-11 13:56:30.360622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:27.944 [2024-07-11 13:56:30.360719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:27.944 [2024-07-11 13:56:30.360737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:27.944 [2024-07-11 13:56:30.360741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.927 13:56:31 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:28.927 13:56:31 -- common/autotest_common.sh@852 -- # return 0
00:26:28.927 13:56:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:28.927 13:56:31 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:28.927 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.927 13:56:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:28.927 13:56:31 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1
00:26:28.927 13:56:31 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:26:28.927 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.927 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.927 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.927 13:56:31 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init
00:26:28.927 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.927 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.927 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.927 13:56:31 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:26:28.927 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.927 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.927 [2024-07-11 13:56:31.198890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:28.927 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.927 13:56:31 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:28.927 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.927 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.927 Malloc1
00:26:28.928 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.928 13:56:31 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:28.928 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.928 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.928 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.928 13:56:31 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:28.928 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.928 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.928 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.928 13:56:31 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:28.928 13:56:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:28.928 13:56:31 -- common/autotest_common.sh@10 -- # set +x
00:26:28.928 [2024-07-11 13:56:31.246634] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:28.928 13:56:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:28.928 13:56:31 -- target/perf_adq.sh@94 -- # perfpid=1699629
00:26:28.928 13:56:31 -- target/perf_adq.sh@95 -- # sleep 2
00:26:28.928 13:56:31 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:26:28.928 EAL: No free 2048 kB hugepages reported on node 1
00:26:30.830 13:56:33 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats
00:26:30.830 13:56:33 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:26:30.830 13:56:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:30.830 13:56:33 -- target/perf_adq.sh@97 -- # wc -l
00:26:30.830 13:56:33 -- common/autotest_common.sh@10 -- # set +x
00:26:31.088 13:56:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:31.088 13:56:33 -- target/perf_adq.sh@97 -- # count=2
00:26:31.088 13:56:33 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]]
00:26:31.088 13:56:33 -- target/perf_adq.sh@103 -- # wait 1699629
00:26:39.210 Initializing NVMe Controllers
00:26:39.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:39.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:26:39.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:26:39.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:26:39.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:26:39.210 Initialization complete. Launching workers.
00:26:39.210 ========================================================
00:26:39.210 Latency(us)
00:26:39.210 Device Information : IOPS MiB/s Average min max
00:26:39.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6056.20 23.66 10568.09 1568.89 57743.83
00:26:39.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5453.30 21.30 11777.42 1301.36 57952.31
00:26:39.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5809.00 22.69 11050.64 1524.38 57812.38
00:26:39.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15607.20 60.97 4100.30 1061.22 45528.81
00:26:39.210 ========================================================
00:26:39.210 Total : 32925.70 128.62 7787.71 1061.22 57952.31
00:26:39.210
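Both ADQ passes stand the target up with the same RPC sequence before launching the initiator; only the placement id and socket priority differ between run 1 (0/0) and run 2 (1/1). A minimal sketch of the second, ADQ-steered pass, assuming the stock scripts/rpc.py client in place of the test's rpc_cmd wrapper; NQN, addresses, and sizes are the values from this run:

    # posix sock options: placement-id 1 lets poll groups follow the NIC queue a
    # connection lands on; zero-copy send is enabled on the server side
    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    # TCP transport with 8 KiB IO units; sock-priority 1 lines up with hw_tc 1 above
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    # 64 MiB x 512 B malloc bdev behind a single subsystem and TCP listener
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator, pinned to lcores 4-7 (-c 0xF0) as in the trace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Comparing the two result tables: the first run splits its 45298.66 total IOPS almost evenly across lcores 4-7 (about 11.2-11.5K each), while this second run is sharply skewed (15607.20 on lcore 7 versus roughly 5.5-6K elsewhere) with a 32925.70 total for the same workload.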
00:26:39.210 13:56:41 -- target/perf_adq.sh@104 -- # nvmftestfini
00:26:39.210 13:56:41 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:39.210 13:56:41 -- nvmf/common.sh@116 -- # sync
00:26:39.210 13:56:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:39.210 13:56:41 -- nvmf/common.sh@119 -- # set +e
00:26:39.210 13:56:41 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:39.210 13:56:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:39.210 rmmod nvme_tcp
00:26:39.210 rmmod nvme_fabrics
00:26:39.210 rmmod nvme_keyring
00:26:39.210 13:56:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:39.210 13:56:41 -- nvmf/common.sh@123 -- # set -e
00:26:39.210 13:56:41 -- nvmf/common.sh@124 -- # return 0
00:26:39.210 13:56:41 -- nvmf/common.sh@477 -- # '[' -n 1699371 ']'
00:26:39.210 13:56:41 -- nvmf/common.sh@478 -- # killprocess 1699371
00:26:39.210 13:56:41 -- common/autotest_common.sh@926 -- # '[' -z 1699371 ']'
00:26:39.210 13:56:41 -- common/autotest_common.sh@930 -- # kill -0 1699371
00:26:39.210 13:56:41 -- common/autotest_common.sh@931 -- # uname
00:26:39.210 13:56:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:39.210 13:56:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1699371
00:26:39.210 13:56:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:26:39.210 13:56:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:26:39.210 13:56:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1699371'
00:26:39.210 killing process with pid 1699371
00:26:39.210 13:56:41 -- common/autotest_common.sh@945 -- # kill 1699371
00:26:39.210 13:56:41 -- common/autotest_common.sh@950 -- # wait 1699371
00:26:39.469 13:56:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:39.469 13:56:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:39.469 13:56:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:39.469 13:56:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:39.469 13:56:41 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:39.469 13:56:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:39.469 13:56:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:39.469 13:56:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:42.753 13:56:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:42.753 13:56:44 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT
00:26:42.753
00:26:42.753 real 0m49.812s
00:26:42.753 user 2m45.991s
00:26:42.753 sys 0m9.495s
00:26:42.753 13:56:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:42.753 13:56:44 -- common/autotest_common.sh@10 -- # set +x
00:26:42.753 ************************************
00:26:42.753 END TEST nvmf_perf_adq
00:26:42.753 ************************************
00:26:42.753 13:56:44 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:26:42.753 13:56:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:26:42.753 13:56:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:42.753 13:56:44 -- common/autotest_common.sh@10 -- # set +x
00:26:42.753 ************************************
00:26:42.753 START TEST nvmf_shutdown
00:26:42.753 ************************************
00:26:42.753 13:56:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:26:42.753 * Looking for test storage...
00:26:42.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:42.753 13:56:44 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:42.753 13:56:44 -- nvmf/common.sh@7 -- # uname -s
00:26:42.753 13:56:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:42.753 13:56:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:42.753 13:56:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:42.753 13:56:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:42.753 13:56:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:42.753 13:56:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:42.753 13:56:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:42.753 13:56:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:42.753 13:56:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:42.753 13:56:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:42.753 13:56:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:42.753 13:56:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:42.753 13:56:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:42.753 13:56:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:42.753 13:56:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:42.753 13:56:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:42.753 13:56:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:42.753 13:56:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:42.753 13:56:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:42.753 13:56:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:42.753 13:56:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:42.753 13:56:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:42.753 13:56:44 -- paths/export.sh@5 -- # export PATH
00:26:42.753 13:56:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:42.753 13:56:44 -- nvmf/common.sh@46 -- # : 0
00:26:42.753 13:56:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:26:42.753 13:56:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:26:42.753 13:56:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:26:42.753 13:56:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:42.753 13:56:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:42.753 13:56:44 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:26:42.753 13:56:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:26:42.753 13:56:44 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:26:42.753 13:56:44 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:42.753 13:56:44 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:42.753 13:56:44 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:26:42.753 13:56:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:26:42.753 13:56:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:42.753 13:56:44 -- common/autotest_common.sh@10 -- # set +x
00:26:42.753 ************************************
00:26:42.753 START TEST nvmf_shutdown_tc1
00:26:42.753 ************************************
00:26:42.753 13:56:44 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1
00:26:42.753 13:56:44 -- target/shutdown.sh@74 -- # starttarget
00:26:42.753 13:56:44 -- target/shutdown.sh@15 -- # nvmftestinit
00:26:42.753 13:56:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:26:42.753 13:56:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:42.753 13:56:44 -- nvmf/common.sh@436 -- # prepare_net_devs
00:26:42.753 13:56:44 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:26:42.753 13:56:44 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:26:42.753 13:56:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:42.753 13:56:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:42.753 13:56:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:42.753 13:56:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:26:42.753 13:56:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:26:42.753 13:56:44 -- nvmf/common.sh@284 -- # xtrace_disable
00:26:42.753 13:56:44 -- common/autotest_common.sh@10 -- # set +x
00:26:48.026 13:56:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:26:48.026 13:56:49 -- nvmf/common.sh@290 -- # pci_devs=()
00:26:48.026 13:56:49 -- nvmf/common.sh@290 -- # local -a pci_devs
00:26:48.026 13:56:49 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:26:48.026 13:56:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:26:48.026 13:56:49 -- nvmf/common.sh@292 -- # pci_drivers=()
00:26:48.026 13:56:49 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:26:48.026 13:56:49 -- nvmf/common.sh@294 -- # net_devs=()
00:26:48.026 13:56:49 -- nvmf/common.sh@294 -- # local -ga net_devs
00:26:48.026 13:56:49 -- nvmf/common.sh@295 -- # e810=()
00:26:48.026 13:56:49 -- nvmf/common.sh@295 -- # local -ga e810
00:26:48.026 13:56:49 -- nvmf/common.sh@296 -- # x722=()
00:26:48.026 13:56:49 -- nvmf/common.sh@296 -- # local -ga x722
00:26:48.027 13:56:49 -- nvmf/common.sh@297 -- # mlx=()
00:26:48.027 13:56:49 -- nvmf/common.sh@297 -- # local -ga mlx
00:26:48.027 13:56:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:48.027 13:56:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:26:48.027 13:56:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:26:48.027 13:56:49 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:26:48.027 13:56:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:48.027 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:48.027 13:56:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:26:48.027 13:56:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:48.027 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:48.027 13:56:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:26:48.027 13:56:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:48.027 13:56:49 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:48.027 13:56:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:48.027 Found net devices under 0000:86:00.0: cvl_0_0
00:26:48.027 13:56:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:26:48.027 13:56:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:26:48.027 13:56:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:48.027 13:56:49 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:48.027 13:56:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:48.027 Found net devices under 0000:86:00.1: cvl_0_1
00:26:48.027 13:56:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:26:48.027 13:56:49 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@402 -- # is_hw=yes
00:26:48.027 13:56:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:26:48.027 13:56:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:26:48.027 13:56:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:48.027 13:56:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:48.027 13:56:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:48.027 13:56:49 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:26:48.027 13:56:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:48.027 13:56:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:48.027 13:56:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:26:48.027 13:56:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:48.027 13:56:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:48.027 13:56:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:26:48.027 13:56:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:26:48.027 13:56:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:26:48.027 13:56:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:48.027 13:56:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:48.027 13:56:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:48.027 13:56:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:26:48.027 13:56:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:48.027 13:56:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:48.027 13:56:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:48.027 13:56:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:26:48.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:48.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms
00:26:48.027
00:26:48.027 --- 10.0.0.2 ping statistics ---
00:26:48.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:48.027 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:26:48.027 13:56:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:48.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:48.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms
00:26:48.027
00:26:48.027 --- 10.0.0.1 ping statistics ---
00:26:48.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:48.027 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:26:48.027 13:56:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:48.027 13:56:50 -- nvmf/common.sh@410 -- # return 0
00:26:48.027 13:56:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:26:48.027 13:56:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:48.027 13:56:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:48.027 13:56:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:48.027 13:56:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:48.027 13:56:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:48.027 13:56:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:48.027 13:56:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:26:48.027 13:56:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:48.027 13:56:50 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:48.027 13:56:50 -- common/autotest_common.sh@10 -- # set +x
00:26:48.027 13:56:50 -- nvmf/common.sh@469 -- # nvmfpid=1704886
00:26:48.027 13:56:50 -- nvmf/common.sh@470 -- # waitforlisten 1704886
00:26:48.027 13:56:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:48.027 13:56:50 -- common/autotest_common.sh@819 -- # '[' -z 1704886 ']'
00:26:48.027 13:56:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:48.027 13:56:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:48.027 13:56:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:48.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:48.027 13:56:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:48.027 13:56:50 -- common/autotest_common.sh@10 -- # set +x
00:26:48.027 [2024-07-11 13:56:50.309738] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:48.027 [2024-07-11 13:56:50.309786] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:48.027 EAL: No free 2048 kB hugepages reported on node 1
00:26:48.027 [2024-07-11 13:56:50.367311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:48.027 [2024-07-11 13:56:50.406000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:48.027 [2024-07-11 13:56:50.406126] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:48.027 [2024-07-11 13:56:50.406134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:48.027 [2024-07-11 13:56:50.406141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:48.027 [2024-07-11 13:56:50.406249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:48.027 [2024-07-11 13:56:50.406337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:48.027 [2024-07-11 13:56:50.406449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:48.027 [2024-07-11 13:56:50.406450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:48.973 13:56:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:48.973 13:56:51 -- common/autotest_common.sh@852 -- # return 0
00:26:48.973 13:56:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:48.973 13:56:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:48.973 13:56:51 -- common/autotest_common.sh@10 -- # set +x
00:26:48.973 13:56:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:48.973 13:56:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:48.973 13:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:48.973 13:56:51 -- common/autotest_common.sh@10 -- # set +x
00:26:48.974 [2024-07-11 13:56:51.170620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:48.974 13:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:48.974 13:56:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:26:48.974 13:56:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:26:48.974 13:56:51 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:48.974 13:56:51 -- common/autotest_common.sh@10 -- # set +x
00:26:48.974 13:56:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:26:48.974 13:56:51 -- target/shutdown.sh@28 -- # cat
00:26:48.974 13:56:51 -- target/shutdown.sh@35 -- # rpc_cmd
00:26:48.974 13:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:48.974 13:56:51 -- common/autotest_common.sh@10 -- # set +x
00:26:49.233 Malloc1
00:26:49.233 [2024-07-11 13:56:51.266433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:49.233 Malloc2
00:26:49.233 Malloc3
00:26:49.233 Malloc4
00:26:49.233 Malloc5
00:26:49.233 Malloc6
00:26:49.233 Malloc7
00:26:49.233 Malloc8
00:26:49.233 Malloc9
00:26:49.233 Malloc10
00:26:49.233 13:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:49.233 13:56:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:26:49.233 13:56:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:49.233 13:56:51 -- common/autotest_common.sh@10 -- # set +x
00:26:49.493 13:56:51 -- target/shutdown.sh@78 -- # perfpid=1705178
00:26:49.493 13:56:51 -- target/shutdown.sh@79 -- # waitforlisten 1705178 /var/tmp/bdevperf.sock
00:26:49.493 13:56:51 -- common/autotest_common.sh@819 -- # '[' -z 1705178 ']'
00:26:49.493 13:56:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:49.493 13:56:51 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63
00:26:49.493 13:56:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:49.493 13:56:51 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:26:49.493 13:56:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:49.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
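The gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 call above feeds bdev_svc its --json configuration over /dev/fd/63. The heredoc loop traced next emits one bdev_nvme_attach_controller entry per subsystem number; after variable expansion each entry has this shape (values for subsystem 1, exactly as the printf output later in this trace shows):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Ten such entries, comma-joined (Nvme1/cnode1 through Nvme10/cnode10), make up the config; the bdevperf invocation later in the test reuses the same generator over /dev/fd/62.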
00:26:49.493 13:56:51 -- nvmf/common.sh@520 -- # config=() 00:26:49.493 13:56:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.493 13:56:51 -- nvmf/common.sh@520 -- # local subsystem config 00:26:49.493 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": 
"$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 [2024-07-11 13:56:51.737136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:49.494 [2024-07-11 13:56:51.737189] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 
00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 13:56:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:49.494 { 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme$subsystem", 00:26:49.494 "trtype": "$TEST_TRANSPORT", 00:26:49.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "$NVMF_PORT", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.494 "hdgst": ${hdgst:-false}, 00:26:49.494 "ddgst": ${ddgst:-false} 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 } 00:26:49.494 EOF 00:26:49.494 )") 00:26:49.494 13:56:51 -- nvmf/common.sh@542 -- # cat 00:26:49.494 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.494 13:56:51 -- nvmf/common.sh@544 -- # jq . 00:26:49.494 13:56:51 -- nvmf/common.sh@545 -- # IFS=, 00:26:49.494 13:56:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme1", 00:26:49.494 "trtype": "tcp", 00:26:49.494 "traddr": "10.0.0.2", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "4420", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.494 "hdgst": false, 00:26:49.494 "ddgst": false 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 },{ 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme2", 00:26:49.494 "trtype": "tcp", 00:26:49.494 "traddr": "10.0.0.2", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "4420", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:49.494 "hdgst": false, 00:26:49.494 "ddgst": false 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 },{ 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme3", 00:26:49.494 "trtype": "tcp", 00:26:49.494 "traddr": "10.0.0.2", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "4420", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:49.494 "hdgst": false, 00:26:49.494 "ddgst": false 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.494 },{ 00:26:49.494 "params": { 00:26:49.494 "name": "Nvme4", 00:26:49.494 "trtype": "tcp", 00:26:49.494 "traddr": "10.0.0.2", 00:26:49.494 "adrfam": "ipv4", 00:26:49.494 "trsvcid": "4420", 00:26:49.494 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:49.494 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:49.494 "hdgst": false, 00:26:49.494 "ddgst": false 00:26:49.494 }, 00:26:49.494 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 00:26:49.495 "name": "Nvme5", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 
00:26:49.495 "name": "Nvme6", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 00:26:49.495 "name": "Nvme7", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 00:26:49.495 "name": "Nvme8", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 00:26:49.495 "name": "Nvme9", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 },{ 00:26:49.495 "params": { 00:26:49.495 "name": "Nvme10", 00:26:49.495 "trtype": "tcp", 00:26:49.495 "traddr": "10.0.0.2", 00:26:49.495 "adrfam": "ipv4", 00:26:49.495 "trsvcid": "4420", 00:26:49.495 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:49.495 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:49.495 "hdgst": false, 00:26:49.495 "ddgst": false 00:26:49.495 }, 00:26:49.495 "method": "bdev_nvme_attach_controller" 00:26:49.495 }' 00:26:49.495 [2024-07-11 13:56:51.794110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.495 [2024-07-11 13:56:51.831583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.871 13:56:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:50.871 13:56:53 -- common/autotest_common.sh@852 -- # return 0 00:26:50.871 13:56:53 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:50.871 13:56:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.871 13:56:53 -- common/autotest_common.sh@10 -- # set +x 00:26:50.871 13:56:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.871 13:56:53 -- target/shutdown.sh@83 -- # kill -9 1705178 00:26:50.871 13:56:53 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:50.871 13:56:53 -- target/shutdown.sh@87 -- # sleep 1 00:26:51.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1705178 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:51.805 13:56:54 -- target/shutdown.sh@88 -- # kill -0 1704886 00:26:51.805 13:56:54 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:51.805 13:56:54 -- 
target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:51.805 13:56:54 -- nvmf/common.sh@520 -- # config=() 00:26:51.805 13:56:54 -- nvmf/common.sh@520 -- # local subsystem config 00:26:51.805 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:51.805 { 00:26:51.805 "params": { 00:26:51.805 "name": "Nvme$subsystem", 00:26:51.805 "trtype": "$TEST_TRANSPORT", 00:26:51.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.805 "adrfam": "ipv4", 00:26:51.805 "trsvcid": "$NVMF_PORT", 00:26:51.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.805 "hdgst": ${hdgst:-false}, 00:26:51.805 "ddgst": ${ddgst:-false} 00:26:51.805 }, 00:26:51.805 "method": "bdev_nvme_attach_controller" 00:26:51.805 } 00:26:51.805 EOF 00:26:51.805 )") 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:51.805 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:51.805 { 00:26:51.805 "params": { 00:26:51.805 "name": "Nvme$subsystem", 00:26:51.805 "trtype": "$TEST_TRANSPORT", 00:26:51.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.805 "adrfam": "ipv4", 00:26:51.805 "trsvcid": "$NVMF_PORT", 00:26:51.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.805 "hdgst": ${hdgst:-false}, 00:26:51.805 "ddgst": ${ddgst:-false} 00:26:51.805 }, 00:26:51.805 "method": "bdev_nvme_attach_controller" 00:26:51.805 } 00:26:51.805 EOF 00:26:51.805 )") 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:51.805 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:51.805 { 00:26:51.805 "params": { 00:26:51.805 "name": "Nvme$subsystem", 00:26:51.805 "trtype": "$TEST_TRANSPORT", 00:26:51.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.805 "adrfam": "ipv4", 00:26:51.805 "trsvcid": "$NVMF_PORT", 00:26:51.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.805 "hdgst": ${hdgst:-false}, 00:26:51.805 "ddgst": ${ddgst:-false} 00:26:51.805 }, 00:26:51.805 "method": "bdev_nvme_attach_controller" 00:26:51.805 } 00:26:51.805 EOF 00:26:51.805 )") 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:51.805 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:51.805 { 00:26:51.805 "params": { 00:26:51.805 "name": "Nvme$subsystem", 00:26:51.805 "trtype": "$TEST_TRANSPORT", 00:26:51.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:51.805 "adrfam": "ipv4", 00:26:51.805 "trsvcid": "$NVMF_PORT", 00:26:51.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.805 "hdgst": ${hdgst:-false}, 00:26:51.805 "ddgst": ${ddgst:-false} 00:26:51.805 }, 00:26:51.805 "method": "bdev_nvme_attach_controller" 00:26:51.805 } 00:26:51.805 EOF 00:26:51.805 )") 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:51.805 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:51.805 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:51.805 { 00:26:51.805 "params": { 00:26:51.805 "name": "Nvme$subsystem", 00:26:51.805 "trtype": "$TEST_TRANSPORT", 00:26:51.805 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:51.805 "adrfam": "ipv4", 00:26:51.805 "trsvcid": "$NVMF_PORT", 00:26:51.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:51.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:51.805 "hdgst": ${hdgst:-false}, 00:26:51.805 "ddgst": ${ddgst:-false} 00:26:51.805 }, 00:26:51.805 "method": "bdev_nvme_attach_controller" 00:26:51.805 } 00:26:51.805 EOF 00:26:51.805 )") 00:26:52.064 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.064 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.064 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.064 { 00:26:52.064 "params": { 00:26:52.064 "name": "Nvme$subsystem", 00:26:52.064 "trtype": "$TEST_TRANSPORT", 00:26:52.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.064 "adrfam": "ipv4", 00:26:52.064 "trsvcid": "$NVMF_PORT", 00:26:52.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.064 "hdgst": ${hdgst:-false}, 00:26:52.064 "ddgst": ${ddgst:-false} 00:26:52.064 }, 00:26:52.064 "method": "bdev_nvme_attach_controller" 00:26:52.064 } 00:26:52.064 EOF 00:26:52.064 )") 00:26:52.064 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.064 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.064 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.064 { 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme$subsystem", 00:26:52.065 "trtype": "$TEST_TRANSPORT", 00:26:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "$NVMF_PORT", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.065 "hdgst": ${hdgst:-false}, 00:26:52.065 "ddgst": ${ddgst:-false} 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 } 00:26:52.065 EOF 00:26:52.065 )") 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.065 [2024-07-11 13:56:54.276784] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:52.065 [2024-07-11 13:56:54.276835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705664 ] 00:26:52.065 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.065 { 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme$subsystem", 00:26:52.065 "trtype": "$TEST_TRANSPORT", 00:26:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "$NVMF_PORT", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.065 "hdgst": ${hdgst:-false}, 00:26:52.065 "ddgst": ${ddgst:-false} 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 } 00:26:52.065 EOF 00:26:52.065 )") 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.065 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.065 { 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme$subsystem", 00:26:52.065 "trtype": "$TEST_TRANSPORT", 00:26:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "$NVMF_PORT", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.065 "hdgst": ${hdgst:-false}, 00:26:52.065 "ddgst": ${ddgst:-false} 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 } 00:26:52.065 EOF 00:26:52.065 )") 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.065 13:56:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:52.065 { 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme$subsystem", 00:26:52.065 "trtype": "$TEST_TRANSPORT", 00:26:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "$NVMF_PORT", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.065 "hdgst": ${hdgst:-false}, 00:26:52.065 "ddgst": ${ddgst:-false} 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 } 00:26:52.065 EOF 00:26:52.065 )") 00:26:52.065 13:56:54 -- nvmf/common.sh@542 -- # cat 00:26:52.065 13:56:54 -- nvmf/common.sh@544 -- # jq . 
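[Editor's note] The xtrace above is gen_nvmf_target_json from nvmf/common.sh at work: the @522 loop runs once per requested subsystem, each @542 cat appends one bdev_nvme_attach_controller stanza to the config array, and the @544 jq call validates what the @545/@546 IFS=, join and printf emit below. A minimal runnable sketch of the same pattern, with assumed defaults and a plain JSON-array wrapper standing in for the fuller config the real helper produces:

    gen_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # Unquoted delimiter: $subsystem and the :- defaults expand now,
            # producing one attach-controller stanza per subsystem.
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "${TEST_TRANSPORT:-tcp}",
        "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
        "adrfam": "ipv4",
        "trsvcid": "${NVMF_PORT:-4420}",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        # Comma-join the stanzas and validate with jq, as traced above;
        # wrapping them in [] here is a simplification of the real config.
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .
    }

    # Usage: gen_target_json_sketch 1 2 3   -> three controller stanzas
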
00:26:52.065 13:56:54 -- nvmf/common.sh@545 -- # IFS=, 00:26:52.065 13:56:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme1", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme2", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme3", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme4", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme5", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme6", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme7", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme8", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": 
"bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme9", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 },{ 00:26:52.065 "params": { 00:26:52.065 "name": "Nvme10", 00:26:52.065 "trtype": "tcp", 00:26:52.065 "traddr": "10.0.0.2", 00:26:52.065 "adrfam": "ipv4", 00:26:52.065 "trsvcid": "4420", 00:26:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:52.065 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:52.065 "hdgst": false, 00:26:52.065 "ddgst": false 00:26:52.065 }, 00:26:52.065 "method": "bdev_nvme_attach_controller" 00:26:52.065 }' 00:26:52.065 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.065 [2024-07-11 13:56:54.334153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.065 [2024-07-11 13:56:54.371933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.445 Running I/O for 1 seconds... 00:26:54.450 00:26:54.450 Latency(us) 00:26:54.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.450 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme1n1 : 1.08 482.33 30.15 0.00 0.00 130679.96 17552.25 103489.89 00:26:54.450 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme2n1 : 1.08 481.70 30.11 0.00 0.00 130122.53 16526.47 101210.38 00:26:54.450 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme3n1 : 1.06 455.69 28.48 0.00 0.00 135343.37 33964.74 127652.73 00:26:54.450 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme4n1 : 1.09 480.44 30.03 0.00 0.00 128744.56 18008.15 98474.96 00:26:54.450 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme5n1 : 1.09 479.97 30.00 0.00 0.00 128075.77 18008.15 98474.96 00:26:54.450 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme6n1 : 1.09 479.51 29.97 0.00 0.00 127423.36 18122.13 100754.48 00:26:54.450 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme7n1 : 1.09 478.61 29.91 0.00 0.00 126828.37 18008.15 101210.38 00:26:54.450 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme8n1 : 1.09 477.15 29.82 0.00 0.00 126337.10 19489.84 103033.99 00:26:54.450 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 Nvme9n1 : 1.10 476.22 29.76 0.00 0.00 125931.20 17894.18 106681.21 00:26:54.450 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:54.450 Verification LBA range: start 0x0 length 0x400 00:26:54.450 
Nvme10n1 : 1.09 484.97 30.31 0.00 0.00 122891.94 8206.25 137682.59
00:26:54.450 ===================================================================================================================
00:26:54.450 Total : 4776.60 298.54 0.00 0.00 128176.17 8206.25 137682.59
00:26:54.710 13:56:57 -- target/shutdown.sh@93 -- # stoptarget
00:26:54.710 13:56:57 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:26:54.710 13:56:57 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:54.710 13:56:57 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:54.710 13:56:57 -- target/shutdown.sh@45 -- # nvmftestfini
00:26:54.710 13:56:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:54.710 13:56:57 -- nvmf/common.sh@116 -- # sync
00:26:54.710 13:56:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:54.710 13:56:57 -- nvmf/common.sh@119 -- # set +e
00:26:54.710 13:56:57 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:54.710 13:56:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:54.710 rmmod nvme_tcp
00:26:54.710 rmmod nvme_fabrics
00:26:54.710 rmmod nvme_keyring
00:26:54.710 13:56:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:54.710 13:56:57 -- nvmf/common.sh@123 -- # set -e
00:26:54.710 13:56:57 -- nvmf/common.sh@124 -- # return 0
00:26:54.710 13:56:57 -- nvmf/common.sh@477 -- # '[' -n 1704886 ']'
00:26:54.710 13:56:57 -- nvmf/common.sh@478 -- # killprocess 1704886
00:26:54.710 13:56:57 -- common/autotest_common.sh@926 -- # '[' -z 1704886 ']'
00:26:54.710 13:56:57 -- common/autotest_common.sh@930 -- # kill -0 1704886
00:26:54.710 13:56:57 -- common/autotest_common.sh@931 -- # uname
00:26:54.710 13:56:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:54.710 13:56:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1704886
00:26:54.710 13:56:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:54.710 13:56:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:54.710 13:56:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1704886'
00:26:54.710 killing process with pid 1704886
00:26:54.710 13:56:57 -- common/autotest_common.sh@945 -- # kill 1704886
00:26:54.710 13:56:57 -- common/autotest_common.sh@950 -- # wait 1704886
00:26:55.279 13:56:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:55.279 13:56:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:55.279 13:56:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:55.279 13:56:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:55.279 13:56:57 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:55.279 13:56:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:55.279 13:56:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:55.279 13:56:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:57.188 13:56:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:57.188
00:26:57.188 real 0m14.605s
00:26:57.188 user 0m33.655s
00:26:57.188 sys 0m5.289s
00:26:57.188 13:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:57.188 13:56:59 -- common/autotest_common.sh@10 -- # set +x
00:26:57.188 ************************************
00:26:57.188 END TEST nvmf_shutdown_tc1
00:26:57.188 ************************************
00:26:57.188 13:56:59 -- target/shutdown.sh@147 -- # run_test
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:57.188 13:56:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:57.188 13:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:57.188 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:26:57.188 ************************************ 00:26:57.188 START TEST nvmf_shutdown_tc2 00:26:57.188 ************************************ 00:26:57.188 13:56:59 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:57.188 13:56:59 -- target/shutdown.sh@98 -- # starttarget 00:26:57.188 13:56:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:57.188 13:56:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:57.188 13:56:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.188 13:56:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:57.188 13:56:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:57.188 13:56:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:57.188 13:56:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.188 13:56:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.188 13:56:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.188 13:56:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:57.188 13:56:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:57.188 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:26:57.188 13:56:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:57.188 13:56:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:57.188 13:56:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:57.188 13:56:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:57.188 13:56:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:57.188 13:56:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:57.188 13:56:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:57.188 13:56:59 -- nvmf/common.sh@294 -- # net_devs=() 00:26:57.188 13:56:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:57.188 13:56:59 -- nvmf/common.sh@295 -- # e810=() 00:26:57.188 13:56:59 -- nvmf/common.sh@295 -- # local -ga e810 00:26:57.188 13:56:59 -- nvmf/common.sh@296 -- # x722=() 00:26:57.188 13:56:59 -- nvmf/common.sh@296 -- # local -ga x722 00:26:57.188 13:56:59 -- nvmf/common.sh@297 -- # mlx=() 00:26:57.188 13:56:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:57.188 13:56:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.188 13:56:59 -- nvmf/common.sh@319 -- # 
pci_devs+=("${e810[@]}") 00:26:57.188 13:56:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:57.188 13:56:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:57.188 13:56:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:57.188 13:56:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:57.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:57.188 13:56:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:57.188 13:56:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:57.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:57.188 13:56:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:57.188 13:56:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:57.189 13:56:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:57.189 13:56:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.189 13:56:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:57.189 13:56:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.189 13:56:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:57.189 Found net devices under 0000:86:00.0: cvl_0_0 00:26:57.189 13:56:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.189 13:56:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:57.189 13:56:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.189 13:56:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:57.189 13:56:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.189 13:56:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:57.189 Found net devices under 0000:86:00.1: cvl_0_1 00:26:57.189 13:56:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.189 13:56:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:57.189 13:56:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:57.189 13:56:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:57.189 13:56:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:57.189 13:56:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.189 13:56:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.189 13:56:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.189 13:56:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:57.189 13:56:59 -- nvmf/common.sh@235 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.189 13:56:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.189 13:56:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:57.189 13:56:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.189 13:56:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.189 13:56:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:57.189 13:56:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:57.189 13:56:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.189 13:56:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.448 13:56:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.448 13:56:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.448 13:56:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:57.448 13:56:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.448 13:56:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.448 13:56:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.448 13:56:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:57.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:26:57.448 00:26:57.448 --- 10.0.0.2 ping statistics --- 00:26:57.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.448 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:57.448 13:56:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:26:57.448 00:26:57.448 --- 10.0.0.1 ping statistics --- 00:26:57.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.448 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:26:57.448 13:56:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.448 13:56:59 -- nvmf/common.sh@410 -- # return 0 00:26:57.448 13:56:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:57.448 13:56:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.448 13:56:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:57.448 13:56:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:57.448 13:56:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.448 13:56:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:57.448 13:56:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:57.448 13:56:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:57.448 13:56:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:57.448 13:56:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:57.448 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:26:57.448 13:56:59 -- nvmf/common.sh@469 -- # nvmfpid=1706708 00:26:57.448 13:56:59 -- nvmf/common.sh@470 -- # waitforlisten 1706708 00:26:57.448 13:56:59 -- common/autotest_common.sh@819 -- # '[' -z 1706708 ']' 00:26:57.448 13:56:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.448 13:56:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:57.448 13:56:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.449 13:56:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:57.449 13:56:59 -- common/autotest_common.sh@10 -- # set +x 00:26:57.449 13:56:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:57.707 [2024-07-11 13:56:59.910021] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:57.707 [2024-07-11 13:56:59.910061] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.707 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.707 [2024-07-11 13:56:59.966014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.707 [2024-07-11 13:57:00.006080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:57.707 [2024-07-11 13:57:00.006199] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.707 [2024-07-11 13:57:00.006209] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.707 [2024-07-11 13:57:00.006216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
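[Editor's note] Those ping exchanges close out nvmftestinit's network plumbing: nvmf/common.sh@247-263 created the cvl_0_0_ns_spdk namespace, moved the first E810 port (cvl_0_0, 10.0.0.2) into it as the target side, left cvl_0_1 (10.0.0.1) in the root namespace as the initiator side, and opened TCP port 4420 before checking reachability in both directions. A rough stand-alone reproduction using a veth pair in place of the physical ports (interface and namespace names here are invented for illustration):

    #!/usr/bin/env bash
    # Sketch of the target/initiator split traced above, with a veth pair
    # standing in for the physical cvl_0_0/cvl_0_1 ports (an assumption;
    # the suite moves a real NIC into the namespace).
    set -e
    NS=nvmf_tgt_ns_sketch

    sudo ip netns add "$NS"
    sudo ip link add veth_host type veth peer name veth_tgt
    sudo ip link set veth_tgt netns "$NS"

    # Initiator side stays in the default namespace.
    sudo ip addr add 10.0.0.1/24 dev veth_host
    sudo ip link set veth_host up

    # Target side lives in the namespace, mirroring "ip netns exec ... 10.0.0.2/24".
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
    sudo ip netns exec "$NS" ip link set veth_tgt up
    sudo ip netns exec "$NS" ip link set lo up

    # Allow the NVMe/TCP listener port through, then verify reachability
    # in both directions exactly as the trace does with ping.
    sudo iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    sudo ip netns exec "$NS" ping -c 1 10.0.0.1
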
00:26:57.707 [2024-07-11 13:57:00.006329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.707 [2024-07-11 13:57:00.006415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.707 [2024-07-11 13:57:00.006523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.707 [2024-07-11 13:57:00.006524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:58.274 13:57:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:58.274 13:57:00 -- common/autotest_common.sh@852 -- # return 0 00:26:58.274 13:57:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:58.274 13:57:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:58.274 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:26:58.533 13:57:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.533 13:57:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.533 13:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.533 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:26:58.533 [2024-07-11 13:57:00.758656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.533 13:57:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:58.533 13:57:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:58.533 13:57:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:58.533 13:57:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:58.533 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:26:58.533 13:57:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:58.533 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.533 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.533 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.533 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.533 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.533 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.533 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.533 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.533 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.533 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.534 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.534 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.534 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.534 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:58.534 13:57:00 -- target/shutdown.sh@28 -- # cat 00:26:58.534 13:57:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:58.534 13:57:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:58.534 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:26:58.534 Malloc1 00:26:58.534 [2024-07-11 13:57:00.854252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.534 Malloc2 
00:26:58.534 Malloc3 00:26:58.534 Malloc4 00:26:58.793 Malloc5 00:26:58.793 Malloc6 00:26:58.793 Malloc7 00:26:58.793 Malloc8 00:26:58.793 Malloc9 00:26:58.793 Malloc10 00:26:59.054 13:57:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.054 13:57:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:59.054 13:57:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:59.054 13:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:59.054 13:57:01 -- target/shutdown.sh@102 -- # perfpid=1707065 00:26:59.054 13:57:01 -- target/shutdown.sh@103 -- # waitforlisten 1707065 /var/tmp/bdevperf.sock 00:26:59.054 13:57:01 -- common/autotest_common.sh@819 -- # '[' -z 1707065 ']' 00:26:59.054 13:57:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.054 13:57:01 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:59.054 13:57:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:59.054 13:57:01 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:59.054 13:57:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:59.054 13:57:01 -- nvmf/common.sh@520 -- # config=() 00:26:59.054 13:57:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:59.054 13:57:01 -- common/autotest_common.sh@10 -- # set +x 00:26:59.054 13:57:01 -- nvmf/common.sh@520 -- # local subsystem config 00:26:59.054 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.054 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.054 { 00:26:59.054 "params": { 00:26:59.054 "name": "Nvme$subsystem", 00:26:59.054 "trtype": "$TEST_TRANSPORT", 00:26:59.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.054 "adrfam": "ipv4", 00:26:59.054 "trsvcid": "$NVMF_PORT", 00:26:59.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.054 "hdgst": ${hdgst:-false}, 00:26:59.054 "ddgst": ${ddgst:-false} 00:26:59.054 }, 00:26:59.054 "method": "bdev_nvme_attach_controller" 00:26:59.054 } 00:26:59.054 EOF 00:26:59.054 )") 00:26:59.054 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.054 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.054 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.054 { 00:26:59.054 "params": { 00:26:59.054 "name": "Nvme$subsystem", 00:26:59.054 "trtype": "$TEST_TRANSPORT", 00:26:59.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.054 "adrfam": "ipv4", 00:26:59.054 "trsvcid": "$NVMF_PORT", 00:26:59.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.054 "hdgst": ${hdgst:-false}, 00:26:59.054 "ddgst": ${ddgst:-false} 00:26:59.054 }, 00:26:59.054 "method": "bdev_nvme_attach_controller" 00:26:59.054 } 00:26:59.054 EOF 00:26:59.054 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 [2024-07-11 13:57:01.325460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:59.055 [2024-07-11 13:57:01.325510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707065 ] 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 13:57:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:59.055 { 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme$subsystem", 00:26:59.055 "trtype": "$TEST_TRANSPORT", 00:26:59.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "$NVMF_PORT", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.055 "hdgst": ${hdgst:-false}, 00:26:59.055 "ddgst": ${ddgst:-false} 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 } 00:26:59.055 EOF 00:26:59.055 )") 00:26:59.055 13:57:01 -- nvmf/common.sh@542 -- # cat 00:26:59.055 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.055 13:57:01 -- nvmf/common.sh@544 -- # jq . 
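[Editor's note] Once jq has validated the assembled config, shutdown.sh hands it to bdevperf without ever writing a file: the "--json /dev/fd/63" on the traced command line is bash process substitution over gen_nvmf_target_json. The shape of the call, reconstructed from the tc2 invocation above (repo-relative binary path assumed):

    # Process substitution turns the generated JSON into a /dev/fd/NN path,
    # matching the "--json /dev/fd/63" seen in the trace; flags as logged:
    # 64-deep queue, 64 KiB I/O, verify workload, 10 s run.
    ./build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10
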
00:26:59.055 13:57:01 -- nvmf/common.sh@545 -- # IFS=, 00:26:59.055 13:57:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme1", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme2", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme3", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme4", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme5", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme6", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme7", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme8", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": 
"bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme9", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 },{ 00:26:59.055 "params": { 00:26:59.055 "name": "Nvme10", 00:26:59.055 "trtype": "tcp", 00:26:59.055 "traddr": "10.0.0.2", 00:26:59.055 "adrfam": "ipv4", 00:26:59.055 "trsvcid": "4420", 00:26:59.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:59.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:59.055 "hdgst": false, 00:26:59.055 "ddgst": false 00:26:59.055 }, 00:26:59.055 "method": "bdev_nvme_attach_controller" 00:26:59.055 }' 00:26:59.055 [2024-07-11 13:57:01.380802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.055 [2024-07-11 13:57:01.418754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.960 Running I/O for 10 seconds... 00:27:01.220 13:57:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:01.220 13:57:03 -- common/autotest_common.sh@852 -- # return 0 00:27:01.220 13:57:03 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:01.220 13:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.220 13:57:03 -- common/autotest_common.sh@10 -- # set +x 00:27:01.220 13:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.220 13:57:03 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:01.220 13:57:03 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:01.220 13:57:03 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:01.220 13:57:03 -- target/shutdown.sh@57 -- # local ret=1 00:27:01.220 13:57:03 -- target/shutdown.sh@58 -- # local i 00:27:01.220 13:57:03 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:01.220 13:57:03 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:01.220 13:57:03 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:01.220 13:57:03 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:01.220 13:57:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.220 13:57:03 -- common/autotest_common.sh@10 -- # set +x 00:27:01.220 13:57:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.220 13:57:03 -- target/shutdown.sh@60 -- # read_io_count=254 00:27:01.220 13:57:03 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:27:01.220 13:57:03 -- target/shutdown.sh@64 -- # ret=0 00:27:01.220 13:57:03 -- target/shutdown.sh@65 -- # break 00:27:01.220 13:57:03 -- target/shutdown.sh@69 -- # return 0 00:27:01.220 13:57:03 -- target/shutdown.sh@109 -- # killprocess 1707065 00:27:01.220 13:57:03 -- common/autotest_common.sh@926 -- # '[' -z 1707065 ']' 00:27:01.220 13:57:03 -- common/autotest_common.sh@930 -- # kill -0 1707065 00:27:01.220 13:57:03 -- common/autotest_common.sh@931 -- # uname 00:27:01.220 13:57:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:01.220 13:57:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1707065 00:27:01.220 13:57:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:01.220 13:57:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:01.220 13:57:03 -- common/autotest_common.sh@944 
-- # echo 'killing process with pid 1707065'
00:27:01.220 killing process with pid 1707065
00:27:01.220 13:57:03 -- common/autotest_common.sh@945 -- # kill 1707065
00:27:01.220 13:57:03 -- common/autotest_common.sh@950 -- # wait 1707065
00:27:01.480 Received shutdown signal, test time was about 0.723278 seconds
00:27:01.480
00:27:01.480 Latency(us)
00:27:01.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:01.480 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme1n1 : 0.67 472.16 29.51 0.00 0.00 132534.87 16868.40 128564.54
00:27:01.480 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme2n1 : 0.69 516.81 32.30 0.00 0.00 120309.57 12993.22 101666.28
00:27:01.480 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme3n1 : 0.69 453.56 28.35 0.00 0.00 135809.41 13677.08 127652.73
00:27:01.480 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme4n1 : 0.69 516.00 32.25 0.00 0.00 118187.05 12993.22 105769.41
00:27:01.480 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme5n1 : 0.69 515.22 32.20 0.00 0.00 116785.07 16298.52 94827.74
00:27:01.480 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme6n1 : 0.69 513.67 32.10 0.00 0.00 116183.09 14702.86 90724.62
00:27:01.480 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme7n1 : 0.67 473.95 29.62 0.00 0.00 124342.31 13620.09 98019.06
00:27:01.480 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme8n1 : 0.69 512.89 32.06 0.00 0.00 114029.83 14417.92 99842.67
00:27:01.480 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme9n1 : 0.68 466.80 29.17 0.00 0.00 123124.00 4131.62 101210.38
00:27:01.480 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:01.480 Verification LBA range: start 0x0 length 0x400
00:27:01.480 Nvme10n1 : 0.72 383.33 23.96 0.00 0.00 141010.31 7750.34 149536.06
00:27:01.480 ===================================================================================================================
00:27:01.480 Total : 4824.39 301.52 0.00 0.00 123608.48 4131.62 149536.06
00:27:01.480 13:57:03 -- target/shutdown.sh@112 -- # sleep 1
00:27:02.857 13:57:04 -- target/shutdown.sh@113 -- # kill -0 1706708
00:27:02.857 13:57:04 -- target/shutdown.sh@115 -- # stoptarget
00:27:02.857 13:57:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:02.857 13:57:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:02.857 13:57:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:02.857 13:57:04 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:02.857 13:57:04 --
nvmf/common.sh@476 -- # nvmfcleanup 00:27:02.857 13:57:04 -- nvmf/common.sh@116 -- # sync 00:27:02.857 13:57:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:02.857 13:57:04 -- nvmf/common.sh@119 -- # set +e 00:27:02.857 13:57:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:02.857 13:57:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:02.857 rmmod nvme_tcp 00:27:02.857 rmmod nvme_fabrics 00:27:02.857 rmmod nvme_keyring 00:27:02.857 13:57:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:02.857 13:57:04 -- nvmf/common.sh@123 -- # set -e 00:27:02.857 13:57:04 -- nvmf/common.sh@124 -- # return 0 00:27:02.857 13:57:04 -- nvmf/common.sh@477 -- # '[' -n 1706708 ']' 00:27:02.857 13:57:04 -- nvmf/common.sh@478 -- # killprocess 1706708 00:27:02.857 13:57:04 -- common/autotest_common.sh@926 -- # '[' -z 1706708 ']' 00:27:02.857 13:57:04 -- common/autotest_common.sh@930 -- # kill -0 1706708 00:27:02.857 13:57:04 -- common/autotest_common.sh@931 -- # uname 00:27:02.857 13:57:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:02.857 13:57:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1706708 00:27:02.857 13:57:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:02.857 13:57:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:02.857 13:57:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1706708' 00:27:02.857 killing process with pid 1706708 00:27:02.857 13:57:05 -- common/autotest_common.sh@945 -- # kill 1706708 00:27:02.857 13:57:05 -- common/autotest_common.sh@950 -- # wait 1706708 00:27:03.116 13:57:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:03.116 13:57:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:03.116 13:57:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:03.116 13:57:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.116 13:57:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:03.116 13:57:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.116 13:57:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.116 13:57:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.022 13:57:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:05.022 00:27:05.022 real 0m7.857s 00:27:05.022 user 0m24.160s 00:27:05.022 sys 0m1.297s 00:27:05.022 13:57:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.022 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.022 ************************************ 00:27:05.022 END TEST nvmf_shutdown_tc2 00:27:05.022 ************************************ 00:27:05.282 13:57:07 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:05.282 13:57:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:05.282 13:57:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.282 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.282 ************************************ 00:27:05.282 START TEST nvmf_shutdown_tc3 00:27:05.282 ************************************ 00:27:05.282 13:57:07 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:27:05.282 13:57:07 -- target/shutdown.sh@120 -- # starttarget 00:27:05.282 13:57:07 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:05.282 13:57:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:05.282 13:57:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.282 13:57:07 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:27:05.282 13:57:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:05.282 13:57:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:05.282 13:57:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.282 13:57:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.282 13:57:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.282 13:57:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:05.282 13:57:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:05.282 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.282 13:57:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:05.282 13:57:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:05.282 13:57:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:05.282 13:57:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:05.282 13:57:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:05.282 13:57:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:05.282 13:57:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:05.282 13:57:07 -- nvmf/common.sh@294 -- # net_devs=() 00:27:05.282 13:57:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:05.282 13:57:07 -- nvmf/common.sh@295 -- # e810=() 00:27:05.282 13:57:07 -- nvmf/common.sh@295 -- # local -ga e810 00:27:05.282 13:57:07 -- nvmf/common.sh@296 -- # x722=() 00:27:05.282 13:57:07 -- nvmf/common.sh@296 -- # local -ga x722 00:27:05.282 13:57:07 -- nvmf/common.sh@297 -- # mlx=() 00:27:05.282 13:57:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:05.282 13:57:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.282 13:57:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:05.282 13:57:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:05.282 13:57:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:05.282 13:57:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:05.282 13:57:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:05.282 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:05.282 13:57:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
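The device discovery traced here (and continued just below) reduces to a couple of array lookups plus a sysfs walk: supported NIC PCI IDs are bucketed into e810/x722/mlx families, the candidate list is narrowed to the e810 entries for this tcp run, and each surviving PCI function is resolved to its kernel netdev. A minimal sketch reconstructed from the xtrace output; pci_bus_cache, assumed here to map "vendor:device" strings to PCI addresses, is populated earlier in nvmf/common.sh and is not shown in this excerpt:

    # sketch of the NIC discovery traced above (bash); pci_bus_cache is assumed external
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs=("${e810[@]}")                    # tcp on e810 hardware: keep only the e810 ports
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # resolve each PCI function to its kernel netdev via sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done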
00:27:05.282 13:57:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:05.282 13:57:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:05.282 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:05.282 13:57:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:05.282 13:57:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:05.282 13:57:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:05.282 13:57:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.282 13:57:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:05.282 13:57:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.282 13:57:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:05.282 Found net devices under 0000:86:00.0: cvl_0_0 00:27:05.282 13:57:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.282 13:57:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:05.282 13:57:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.282 13:57:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:05.282 13:57:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.282 13:57:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:05.282 Found net devices under 0000:86:00.1: cvl_0_1 00:27:05.282 13:57:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.283 13:57:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:05.283 13:57:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:05.283 13:57:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:05.283 13:57:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:05.283 13:57:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:05.283 13:57:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.283 13:57:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.283 13:57:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.283 13:57:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:05.283 13:57:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.283 13:57:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.283 13:57:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:05.283 13:57:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.283 13:57:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.283 13:57:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:05.283 13:57:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:05.283 13:57:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.283 13:57:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.283 13:57:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.283 13:57:07 -- 
nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.283 13:57:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:05.283 13:57:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.283 13:57:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.542 13:57:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.542 13:57:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:05.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:05.542 00:27:05.542 --- 10.0.0.2 ping statistics --- 00:27:05.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.542 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:05.542 13:57:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:27:05.542 00:27:05.542 --- 10.0.0.1 ping statistics --- 00:27:05.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.542 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:27:05.542 13:57:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.542 13:57:07 -- nvmf/common.sh@410 -- # return 0 00:27:05.542 13:57:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:05.542 13:57:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.542 13:57:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:05.542 13:57:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:05.542 13:57:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.542 13:57:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:05.542 13:57:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:05.542 13:57:07 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:05.542 13:57:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:05.542 13:57:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:05.542 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.542 13:57:07 -- nvmf/common.sh@469 -- # nvmfpid=1708379 00:27:05.542 13:57:07 -- nvmf/common.sh@470 -- # waitforlisten 1708379 00:27:05.542 13:57:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:05.542 13:57:07 -- common/autotest_common.sh@819 -- # '[' -z 1708379 ']' 00:27:05.542 13:57:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.542 13:57:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:05.542 13:57:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.542 13:57:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:05.542 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:27:05.542 [2024-07-11 13:57:07.869091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
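nvmf_tcp_init, traced above, is what turns the two ports of one physical NIC into a self-contained initiator/target pair: cvl_0_0 (the target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace, cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, port 4420 is opened in the firewall, and both directions are pinged before nvmf_tgt is launched inside the namespace. Condensed from the trace, with interface names and addresses exactly as they appear there and error handling omitted:

    # sketch of nvmf_tcp_init (bash), condensed from the xtrace above
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root ns
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                           # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1    # target -> initiator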
00:27:05.542 [2024-07-11 13:57:07.869134] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.542 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.542 [2024-07-11 13:57:07.928245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.542 [2024-07-11 13:57:07.967104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:05.542 [2024-07-11 13:57:07.967239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.542 [2024-07-11 13:57:07.967248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.542 [2024-07-11 13:57:07.967255] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.542 [2024-07-11 13:57:07.967361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.542 [2024-07-11 13:57:07.967470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.542 [2024-07-11 13:57:07.967577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.542 [2024-07-11 13:57:07.967578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:06.479 13:57:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:06.479 13:57:08 -- common/autotest_common.sh@852 -- # return 0 00:27:06.479 13:57:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:06.479 13:57:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:06.479 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:27:06.479 13:57:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.479 13:57:08 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.479 13:57:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:06.479 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:27:06.479 [2024-07-11 13:57:08.707594] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.479 13:57:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:06.479 13:57:08 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:06.479 13:57:08 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:06.479 13:57:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:06.479 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:27:06.479 13:57:08 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- 
target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:06.479 13:57:08 -- target/shutdown.sh@28 -- # cat 00:27:06.479 13:57:08 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:06.479 13:57:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:06.479 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:27:06.480 Malloc1 00:27:06.480 [2024-07-11 13:57:08.799331] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.480 Malloc2 00:27:06.480 Malloc3 00:27:06.480 Malloc4 00:27:06.739 Malloc5 00:27:06.739 Malloc6 00:27:06.739 Malloc7 00:27:06.739 Malloc8 00:27:06.739 Malloc9 00:27:06.739 Malloc10 00:27:06.739 13:57:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:06.739 13:57:09 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:06.739 13:57:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:06.739 13:57:09 -- common/autotest_common.sh@10 -- # set +x 00:27:06.998 13:57:09 -- target/shutdown.sh@124 -- # perfpid=1708954 00:27:06.998 13:57:09 -- target/shutdown.sh@125 -- # waitforlisten 1708954 /var/tmp/bdevperf.sock 00:27:06.998 13:57:09 -- common/autotest_common.sh@819 -- # '[' -z 1708954 ']' 00:27:06.998 13:57:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.998 13:57:09 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:06.998 13:57:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:06.998 13:57:09 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:06.998 13:57:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:06.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
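The create_subsystems step traced a few lines up batches its RPCs: one stanza per subsystem is appended to rpcs.txt through the repeated cat calls, and the whole file is then replayed in a single rpc_cmd invocation, which is why ten Malloc bdevs and ten cnode subsystems appear at once. A sketch of that shape; the literal RPC lines and the stdin replay are assumptions (the excerpt shows only the cat loop and the final rpc_cmd), though the Malloc names, cnode NQNs, and the 10.0.0.2:4420 listener all match the surrounding log:

    # shape of create_subsystems (bash); the per-subsystem RPC lines are illustrative
    num_subsystems=({1..10})
    rm -rf "$testdir/rpcs.txt"
    for i in "${num_subsystems[@]}"; do
        cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
    rpc_cmd < "$testdir/rpcs.txt"   # replay the whole batch against the target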
00:27:06.998 13:57:09 -- nvmf/common.sh@520 -- # config=() 00:27:06.998 13:57:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:06.998 13:57:09 -- nvmf/common.sh@520 -- # local subsystem config 00:27:06.998 13:57:09 -- common/autotest_common.sh@10 -- # set +x 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": 
"$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 [2024-07-11 13:57:09.262614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:06.998 [2024-07-11 13:57:09.262664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708954 ] 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.998 "params": { 00:27:06.998 "name": "Nvme$subsystem", 00:27:06.998 "trtype": "$TEST_TRANSPORT", 00:27:06.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.998 "adrfam": "ipv4", 00:27:06.998 "trsvcid": "$NVMF_PORT", 00:27:06.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.998 "hdgst": ${hdgst:-false}, 00:27:06.998 "ddgst": ${ddgst:-false} 00:27:06.998 }, 00:27:06.998 "method": "bdev_nvme_attach_controller" 00:27:06.998 } 00:27:06.998 EOF 00:27:06.998 )") 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.998 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.998 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.998 { 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme$subsystem", 00:27:06.999 "trtype": "$TEST_TRANSPORT", 00:27:06.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "$NVMF_PORT", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.999 "hdgst": ${hdgst:-false}, 00:27:06.999 "ddgst": ${ddgst:-false} 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 } 00:27:06.999 EOF 00:27:06.999 )") 00:27:06.999 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.999 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.999 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.999 { 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme$subsystem", 00:27:06.999 "trtype": "$TEST_TRANSPORT", 00:27:06.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": 
"$NVMF_PORT", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.999 "hdgst": ${hdgst:-false}, 00:27:06.999 "ddgst": ${ddgst:-false} 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 } 00:27:06.999 EOF 00:27:06.999 )") 00:27:06.999 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.999 13:57:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:06.999 13:57:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:06.999 { 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme$subsystem", 00:27:06.999 "trtype": "$TEST_TRANSPORT", 00:27:06.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "$NVMF_PORT", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.999 "hdgst": ${hdgst:-false}, 00:27:06.999 "ddgst": ${ddgst:-false} 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 } 00:27:06.999 EOF 00:27:06.999 )") 00:27:06.999 13:57:09 -- nvmf/common.sh@542 -- # cat 00:27:06.999 13:57:09 -- nvmf/common.sh@544 -- # jq . 00:27:06.999 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.999 13:57:09 -- nvmf/common.sh@545 -- # IFS=, 00:27:06.999 13:57:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme1", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme2", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme3", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme4", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme5", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 
"params": { 00:27:06.999 "name": "Nvme6", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme7", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme8", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme9", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 },{ 00:27:06.999 "params": { 00:27:06.999 "name": "Nvme10", 00:27:06.999 "trtype": "tcp", 00:27:06.999 "traddr": "10.0.0.2", 00:27:06.999 "adrfam": "ipv4", 00:27:06.999 "trsvcid": "4420", 00:27:06.999 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:06.999 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:06.999 "hdgst": false, 00:27:06.999 "ddgst": false 00:27:06.999 }, 00:27:06.999 "method": "bdev_nvme_attach_controller" 00:27:06.999 }' 00:27:06.999 [2024-07-11 13:57:09.319461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.999 [2024-07-11 13:57:09.357559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.378 Running I/O for 10 seconds... 
00:27:08.378 13:57:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:08.378 13:57:10 -- common/autotest_common.sh@852 -- # return 0 00:27:08.378 13:57:10 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:08.378 13:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.378 13:57:10 -- common/autotest_common.sh@10 -- # set +x 00:27:08.378 13:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.378 13:57:10 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.378 13:57:10 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:08.378 13:57:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:08.378 13:57:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:08.378 13:57:10 -- target/shutdown.sh@57 -- # local ret=1 00:27:08.378 13:57:10 -- target/shutdown.sh@58 -- # local i 00:27:08.378 13:57:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:08.378 13:57:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.378 13:57:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.378 13:57:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.378 13:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.378 13:57:10 -- common/autotest_common.sh@10 -- # set +x 00:27:08.378 13:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.638 13:57:10 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:08.638 13:57:10 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:08.638 13:57:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:08.917 13:57:11 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:08.917 13:57:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.917 13:57:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.917 13:57:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.917 13:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:08.917 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:27:08.917 13:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:08.917 13:57:11 -- target/shutdown.sh@60 -- # read_io_count=166 00:27:08.917 13:57:11 -- target/shutdown.sh@63 -- # '[' 166 -ge 100 ']' 00:27:08.917 13:57:11 -- target/shutdown.sh@64 -- # ret=0 00:27:08.917 13:57:11 -- target/shutdown.sh@65 -- # break 00:27:08.917 13:57:11 -- target/shutdown.sh@69 -- # return 0 00:27:08.917 13:57:11 -- target/shutdown.sh@134 -- # killprocess 1708379 00:27:08.917 13:57:11 -- common/autotest_common.sh@926 -- # '[' -z 1708379 ']' 00:27:08.917 13:57:11 -- common/autotest_common.sh@930 -- # kill -0 1708379 00:27:08.917 13:57:11 -- common/autotest_common.sh@931 -- # uname 00:27:08.917 13:57:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:08.917 13:57:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1708379 00:27:08.917 13:57:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:08.917 13:57:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:08.917 13:57:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1708379' 00:27:08.917 killing process with pid 1708379 00:27:08.917 13:57:11 -- common/autotest_common.sh@945 -- # kill 1708379 00:27:08.917 13:57:11 -- common/autotest_common.sh@950 -- # wait 1708379 00:27:08.917 [2024-07-11 
13:57:11.208410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3140 is same with the state(5) to be set
00:27:08.917 [... same tcp.c:1574 message repeated for tqpair=0xed3140 with successive timestamps through 13:57:11.208830 ...]
00:27:08.917 [2024-07-11 13:57:11.209954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.918 [2024-07-11 13:57:11.209986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.918 [... same command/completion pair repeated for cid:1, cid:2 and cid:3 ...]
00:27:08.918 [2024-07-11 13:57:11.210040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1db80 is same with the state(5) to be set
00:27:08.918 [2024-07-11 13:57:11.211335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed5ab0 is same with the state(5) to be set
00:27:08.918 [... same message repeated for tqpair=0xed5ab0 with successive timestamps through 13:57:11.211731 ...]
00:27:08.918 [2024-07-11 13:57:11.214242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set
00:27:08.919 [... same message repeated for tqpair=0xed35d0 with successive timestamps; log truncated here ...]
recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.214557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed35d0 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215851] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 
00:27:08.919 [2024-07-11 13:57:11.215987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.215994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.919 [2024-07-11 13:57:11.216047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is 
same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.216186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3a80 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217225] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 00:27:08.920 [2024-07-11 13:57:11.217353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed3f10 is same with the state(5) to be set 
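The flood of identical tcp.c:1574 lines above is SPDK's nvmf_tcp_qpair_set_recv_state rejecting a transition into the recv state the qpair already holds, logged once per call while a poller keeps requesting that same state. A minimal C sketch of such a guard follows; only the function name and message text come from the log, and the enum values and field names are illustrative assumptions, not SPDK's actual definitions.

#include <stdio.h>

/* Recv states a TCP qpair can move through while parsing PDUs; the
 * log's "state(5)" is the sixth enum value (the exact mapping here is
 * an assumption). */
enum pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,          /* value 5 */
};

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

/* Guard modeled on the logged behavior: asking for the state the
 * qpair is already in just emits the error line and returns, so a
 * poll loop that keeps requesting the same state floods the log. */
static void qpair_set_recv_state(struct tcp_qpair *tqpair,
                                 enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}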
00:27:08.920 [2024-07-11 13:57:11.217821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.920 [2024-07-11 13:57:11.217843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... every command below received the identical completion: nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:08.920 [2024-07-11 13:57:11.217859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.920 [2024-07-11 13:57:11.217875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.217990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed43a0 is same with the state(5) to be set
00:27:08.921 [... message repeated verbatim for tqpair=0xed43a0 through 13:57:11.218480, its lines interleaved mid-record with the command notices below ...]
00:27:08.921 [2024-07-11 13:57:11.218023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.921 [2024-07-11 13:57:11.218224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.922 [2024-07-11 13:57:11.218687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.923 [2024-07-11 13:57:11.218954] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf18400 was disconnected and freed. reset controller.
00:27:08.923 [2024-07-11 13:57:11.220037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4830 is same with the state(5) to be set [identical message repeated 63 times through 13:57:11.220418; duplicate lines elided]
00:27:08.923 [2024-07-11 13:57:11.220422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.924 [2024-07-11 13:57:11.220445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.924 [2024-07-11
13:57:11.220456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.220987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.220993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.221007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.221023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.221037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.221052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.924 [2024-07-11 13:57:11.221066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.924 [2024-07-11 13:57:11.221074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.925 [2024-07-11 13:57:11.221334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.925 [2024-07-11 13:57:11.221342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.925 [2024-07-11 13:57:11.221348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.925 [2024-07-11 13:57:11.221356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.925 [2024-07-11 13:57:11.221362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.925 [2024-07-11 13:57:11.221371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.925 [2024-07-11 13:57:11.221380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.925 [2024-07-11 13:57:11.221448] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef8420 was disconnected and freed. reset controller.
00:27:08.925 [2024-07-11 13:57:11.222029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed4ce0 is same with the state(5) to be set [identical message repeated 63 times through 13:57:11.222415; duplicate lines elided]
00:27:08.926 [2024-07-11 13:57:11.222579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:08.926 [2024-07-11 13:57:11.222611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1db80 (9): Bad file descriptor
00:27:08.926 [2024-07-11 13:57:11.222646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe797d0 is same with the state(5) to be set
00:27:08.926 [2024-07-11 13:57:11.222727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.222824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.926 [2024-07-11 13:57:11.222856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6ff60 is same with the state(5) to be set
00:27:08.926 [2024-07-11 13:57:11.222906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.926 [2024-07-11 13:57:11.223368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed5600 is same with the state(5) to be set [identical message repeated 63 times through 13:57:11.224948; duplicate lines elided]
00:27:08.927 [2024-07-11 13:57:11.234589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a010 is same with the state(5) to be set
00:27:08.927 [2024-07-11 13:57:11.234689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:08.927 [2024-07-11 13:57:11.234758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.927 [2024-07-11 13:57:11.234767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea130 is same with the state(5) to be set
00:27:08.927 [2024-07-11 13:57:11.234797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c)
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e17d0 is same with the state(5) to be set 00:27:08.927 [2024-07-11 13:57:11.234902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.234967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.234977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7bcd0 is same with the state(5) to be set 00:27:08.927 [2024-07-11 13:57:11.235006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b270 is same with the state(5) to be set 00:27:08.927 [2024-07-11 13:57:11.235107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.927 [2024-07-11 13:57:11.235184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.235192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe8510 is same with the state(5) to be set 00:27:08.927 [2024-07-11 13:57:11.236588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.236627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.236648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.236668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.236688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.927 [2024-07-11 13:57:11.236708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.927 [2024-07-11 13:57:11.236717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.236980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.236993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:08.928 [2024-07-11 13:57:11.237072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 
[2024-07-11 13:57:11.237281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 
13:57:11.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.928 [2024-07-11 13:57:11.237585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.928 [2024-07-11 13:57:11.237594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237685] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.237898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.237987] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1172cb0 was disconnected and freed. reset controller. 00:27:08.929 [2024-07-11 13:57:11.238343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238533] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.929 [2024-07-11 13:57:11.238751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.929 [2024-07-11 13:57:11.238760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1176e50 is same with the state(5) to be set 00:27:08.929 [2024-07-11 13:57:11.238818] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1176e50 was disconnected and freed. reset controller. 00:27:08.929 [2024-07-11 13:57:11.238951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:08.929 [2024-07-11 13:57:11.238983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b270 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.930 [2024-07-11 13:57:11.239050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.239059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.930 [2024-07-11 13:57:11.239070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.239080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.930 [2024-07-11 13:57:11.239088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.239098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.930 [2024-07-11 13:57:11.239107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.239115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e25d0 is same with the state(5) to be set 00:27:08.930 [2024-07-11 13:57:11.239129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe797d0 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6ff60 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a010 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea130 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e17d0 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.239225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bcd0 (9): Bad 
file descriptor 00:27:08.930 [2024-07-11 13:57:11.239245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe8510 (9): Bad file descriptor 00:27:08.930 [2024-07-11 13:57:11.240966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.240992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 
[2024-07-11 13:57:11.241183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 
13:57:11.241386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.930 [2024-07-11 13:57:11.241578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.930 [2024-07-11 13:57:11.241586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: long run of repeated nvme_qpair.c *NOTICE* pairs; nvme_io_qpair_print_command READ/WRITE commands (sqid:1, nsid:1, len:128, lba 30464-34688), each completed by spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1]
00:27:08.931 [2024-07-11 13:57:11.242359] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1175870 was disconnected and freed. reset controller.
00:27:08.931 [2024-07-11 13:57:11.243439] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:08.931 [2024-07-11 13:57:11.243472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:08.931 [2024-07-11 13:57:11.243718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.931 [2024-07-11 13:57:11.243909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.931 [2024-07-11 13:57:11.243921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1db80 with addr=10.0.0.2, port=4420
00:27:08.931 [2024-07-11 13:57:11.243930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1db80 is same with the state(5) to be set
[log condensed: long run of repeated nvme_qpair.c *NOTICE* pairs; READ/WRITE commands (sqid:1, nsid:1, len:128, lba 24320-34688), each completed as ABORTED - SQ DELETION (00/08) qid:1]
00:27:08.933 [2024-07-11 13:57:11.245470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04760 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.245527] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf04760 was disconnected and freed. reset controller.
00:27:08.933 [2024-07-11 13:57:11.245580] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:08.933 [2024-07-11 13:57:11.246865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:08.933 [2024-07-11 13:57:11.246891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e25d0 (9): Bad file descriptor
00:27:08.933 [2024-07-11 13:57:11.247121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.247253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.247265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2b270 with addr=10.0.0.2, port=4420
00:27:08.933 [2024-07-11 13:57:11.247274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b270 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.247445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.247633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.247644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea130 with addr=10.0.0.2, port=4420
00:27:08.933 [2024-07-11 13:57:11.247652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea130 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.247663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1db80 (9): Bad file descriptor
00:27:08.933 [2024-07-11 13:57:11.249139] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:08.933 [2024-07-11 13:57:11.249499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:08.933 [2024-07-11 13:57:11.249519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:08.933 [2024-07-11 13:57:11.249551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b270 (9): Bad file descriptor
00:27:08.933 [2024-07-11 13:57:11.249568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea130 (9): Bad file descriptor
00:27:08.933 [2024-07-11 13:57:11.249579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:08.933 [2024-07-11 13:57:11.249586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:08.933 [2024-07-11 13:57:11.249596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:08.933 [2024-07-11 13:57:11.249652] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:08.933 [2024-07-11 13:57:11.250018] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:08.933 [2024-07-11 13:57:11.250041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:08.933 [2024-07-11 13:57:11.250343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.250527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.250540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e25d0 with addr=10.0.0.2, port=4420
00:27:08.933 [2024-07-11 13:57:11.250549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e25d0 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.250816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.251009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.251022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe797d0 with addr=10.0.0.2, port=4420
00:27:08.933 [2024-07-11 13:57:11.251030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe797d0 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.251169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.251360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.933 [2024-07-11 13:57:11.251371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7bcd0 with addr=10.0.0.2, port=4420
00:27:08.933 [2024-07-11 13:57:11.251380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7bcd0 is same with the state(5) to be set
00:27:08.933 [2024-07-11 13:57:11.251388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:08.933 [2024-07-11 13:57:11.251396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:08.933 [2024-07-11 13:57:11.251404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:08.933 [2024-07-11 13:57:11.251418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:08.933 [2024-07-11 13:57:11.251426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:08.933 [2024-07-11 13:57:11.251433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
[log condensed: long run of repeated nvme_qpair.c *NOTICE* pairs; READ/WRITE commands (sqid:1, nsid:1, len:128, lba 18944-29056), each completed as ABORTED - SQ DELETION (00/08) qid:1]
00:27:08.935 [2024-07-11 13:57:11.252950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf05d00 is same with the state(5) to be set
[log condensed: further run of repeated nvme_qpair.c *NOTICE* pairs; READ/WRITE commands (sqid:1, nsid:1, len:128, lba 18944-25984), each completed as ABORTED - SQ DELETION (00/08) qid:1]
00:27:08.935 [2024-07-11 13:57:11.254384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:08.935 [2024-07-11 13:57:11.254391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.935 [2024-07-11 13:57:11.254400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.935 [2024-07-11 13:57:11.254407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.935 [2024-07-11 13:57:11.254416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.935 [2024-07-11 13:57:11.254422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.935 [2024-07-11 13:57:11.254431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.935 [2024-07-11 13:57:11.254438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.935 [2024-07-11 13:57:11.254446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.254969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.254977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1174290 is same with the state(5) to be set 00:27:08.936 [2024-07-11 13:57:11.256019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.936 [2024-07-11 13:57:11.256126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.936 [2024-07-11 13:57:11.256134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:08.937 [2024-07-11 13:57:11.256676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.937 [2024-07-11 13:57:11.256799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.937 [2024-07-11 13:57:11.256807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 
[2024-07-11 13:57:11.256829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 
13:57:11.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.256992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.256999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.257007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.257014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.257022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf07080 is same with the state(5) to be set 00:27:08.938 [2024-07-11 13:57:11.258774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.258991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.258998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.938 [2024-07-11 13:57:11.259129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.938 [2024-07-11 13:57:11.259136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.939 [2024-07-11 13:57:11.259759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.939 [2024-07-11 13:57:11.259767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf085a0 is same with the state(5) to be set 00:27:08.939 [2024-07-11 13:57:11.261784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.939 [2024-07-11 13:57:11.261799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
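These paired *NOTICE* records are the SPDK initiator draining I/O qpair 1 while the controllers are force-reset: nvme_qpair.c:243 prints each still-outstanding READ/WRITE command, and nvme_qpair.c:474 prints the completion it was failed with. The status (00/08) decodes to status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion". A quick way to summarize such a dump is a shell one-liner like the sketch below; the file name try.txt is a placeholder for wherever this console output was captured.

  # Count aborted commands per opcode in the qpair dump (try.txt is hypothetical).
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' try.txt \
      | awk '{ n[$NF]++ } END { for (op in n) print op, n[op] }'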
00:27:08.939 [2024-07-11 13:57:11.261807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:08.940 [2024-07-11 13:57:11.261816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:08.940 [2024-07-11 13:57:11.261826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:08.940 [2024-07-11 13:57:11.261858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e25d0 (9): Bad file descriptor
00:27:08.940 [2024-07-11 13:57:11.261868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe797d0 (9): Bad file descriptor
00:27:08.940 [2024-07-11 13:57:11.261876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bcd0 (9): Bad file descriptor
00:27:08.940 [2024-07-11 13:57:11.261914] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:08.940 [2024-07-11 13:57:11.261929] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:08.940 [2024-07-11 13:57:11.261940] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:08.940 [2024-07-11 13:57:11.261949] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:08.940 task offset: 29184 on job bdev=Nvme1n1 fails
00:27:08.940
00:27:08.940 Latency(us)
00:27:08.940 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:27:08.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme1n1 ended in about 0.48 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme1n1  :         0.48  436.28   27.27  134.24  0.00  111193.31   23023.08  115799.26
00:27:08.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme2n1 ended in about 0.49 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme2n1  :         0.49  422.31   26.39  129.94  0.00  113336.22   32369.09   96651.35
00:27:08.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme3n1 ended in about 0.50 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme3n1  :         0.50  411.93   25.75  126.75  0.00  114747.92   44450.50   94827.74
00:27:08.940 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme4n1 ended in about 0.51 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme4n1  :         0.51  321.47   20.09  125.45  0.00  136544.83   79327.05  121270.09
00:27:08.940 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme5n1 ended in about 0.50 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme5n1  :         0.50  418.55   26.16  128.79  0.00  109781.18   30317.52   93915.94
00:27:08.940 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme6n1 ended in about 0.51 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme6n1  :         0.51  320.21   20.01  124.96  0.00  133439.49   84341.98  105313.50
00:27:08.940 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme7n1 ended in about 0.50 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme7n1  :         0.50  413.61   25.85  127.26  0.00  108104.86   28265.96   93460.03
00:27:08.940 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme8n1 ended in about 0.50 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme8n1  :         0.50  416.45   26.03   40.04  0.00  122375.86   20173.69   92092.33
00:27:08.940 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme9n1 ended in about 0.51 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme9n1  :         0.51  318.94   19.93  124.46  0.00  128514.55   82062.47  101666.28
00:27:08.940 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.940 Job: Nvme10n1 ended in about 0.52 seconds with error
00:27:08.940 Verification LBA range: start 0x0 length 0x400
00:27:08.940 Nvme10n1 :         0.52  317.25   19.83  123.80  0.00  127552.75   80694.76  101666.28
00:27:08.940 ===================================================================================================================
00:27:08.940 Total    :              3796.99  237.31 1185.70  0.00  119755.97   20173.69  121270.09
00:27:08.940 [2024-07-11 13:57:11.288870] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:08.940 [2024-07-11 13:57:11.288915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:08.940 [2024-07-11 13:57:11.289209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.289365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.289377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4a010 with addr=10.0.0.2, port=4420
00:27:08.940 [2024-07-11 13:57:11.289386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a010 is same with the state(5) to be set
00:27:08.940 [2024-07-11 13:57:11.289623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.289760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.289770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6ff60 with addr=10.0.0.2, port=4420
00:27:08.940 [2024-07-11 13:57:11.289777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6ff60 is same with the state(5) to be set
00:27:08.940 [2024-07-11 13:57:11.289896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.290080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:08.940 [2024-07-11 13:57:11.290090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e17d0 with addr=10.0.0.2, port=4420
00:27:08.940 [2024-07-11 13:57:11.290102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e17d0 is same with the state(5) to be set
00:27:08.940 [2024-07-11 13:57:11.290110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:08.940 [2024-07-11 13:57:11.290116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
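bdevperf closes a failed run with this per-device table; the Total row should simply be the column-wise sum of the ten Nvme*n1 rows (the latency Average is IO-weighted, min/max are the extremes across devices). A sanity check over the rows as laid out above, assuming each row keeps its leading harness timestamp so the device name is the second field, could look like the sketch below (try.txt again a placeholder for the captured output).

  # Sum IOPS ($5), MiB/s ($6) and Fail/s ($7) across the per-device rows.
  awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" { iops += $5; mibs += $6; fails += $7 }
       END { printf "IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mibs, fails }' try.txt
  # Prints IOPS=3797.00 MiB/s=237.31 Fail/s=1185.69, which matches the Total
  # row (3796.99 / 237.31 / 1185.70) to within per-row rounding.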
[2024-07-11 13:57:11.290123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:08.940 [2024-07-11 13:57:11.290136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:08.940 [2024-07-11 13:57:11.290141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:08.940 [2024-07-11 13:57:11.290147] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:08.940 [2024-07-11 13:57:11.290156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:08.940 [2024-07-11 13:57:11.290165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:08.940 [2024-07-11 13:57:11.290172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:08.940 [2024-07-11 13:57:11.291093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:08.940 [2024-07-11 13:57:11.291106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:08.940 [2024-07-11 13:57:11.291116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.940 [2024-07-11 13:57:11.291124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.940 [2024-07-11 13:57:11.291131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.940 [2024-07-11 13:57:11.291137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.940 [2024-07-11 13:57:11.291476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.291717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.291729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe8510 with addr=10.0.0.2, port=4420 00:27:08.940 [2024-07-11 13:57:11.291736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe8510 is same with the state(5) to be set 00:27:08.940 [2024-07-11 13:57:11.291749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a010 (9): Bad file descriptor 00:27:08.940 [2024-07-11 13:57:11.291760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6ff60 (9): Bad file descriptor 00:27:08.940 [2024-07-11 13:57:11.291769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e17d0 (9): Bad file descriptor 00:27:08.940 [2024-07-11 13:57:11.291810] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:08.940 [2024-07-11 13:57:11.291821] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:08.940 [2024-07-11 13:57:11.291831] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:08.940 [2024-07-11 13:57:11.292147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea130 with addr=10.0.0.2, port=4420 00:27:08.940 [2024-07-11 13:57:11.292376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea130 is same with the state(5) to be set 00:27:08.940 [2024-07-11 13:57:11.292558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2b270 with addr=10.0.0.2, port=4420 00:27:08.940 [2024-07-11 13:57:11.292764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b270 is same with the state(5) to be set 00:27:08.940 [2024-07-11 13:57:11.292891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.940 [2024-07-11 13:57:11.292997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1db80 with addr=10.0.0.2, port=4420 00:27:08.940 [2024-07-11 13:57:11.293003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1db80 is same with the state(5) to be set 00:27:08.940 [2024-07-11 13:57:11.293012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe8510 (9): Bad file descriptor 00:27:08.940 [2024-07-11 13:57:11.293020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:08.940 [2024-07-11 13:57:11.293037] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:08.940 [2024-07-11 13:57:11.293044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:08.940 [2024-07-11 13:57:11.293055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:08.940 [2024-07-11 13:57:11.293060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:08.940 [2024-07-11 13:57:11.293067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:08.940 [2024-07-11 13:57:11.293077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:08.940 [2024-07-11 13:57:11.293083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:08.940 [2024-07-11 13:57:11.293089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:27:08.940 [2024-07-11 13:57:11.293143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:27:08.941 [2024-07-11 13:57:11.293153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:08.941 [2024-07-11 13:57:11.293167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:08.941 [2024-07-11 13:57:11.293175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.293180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.293186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.293208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea130 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.293217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2b270 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.293226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1db80 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.293234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.293240] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.293246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:08.941 [2024-07-11 13:57:11.293271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.941 [2024-07-11 13:57:11.293416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.293553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.293563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7bcd0 with addr=10.0.0.2, port=4420 00:27:08.941 [2024-07-11 13:57:11.293570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7bcd0 is same with the state(5) to be set 00:27:08.941 [2024-07-11 13:57:11.293762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.293875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.293884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe797d0 with addr=10.0.0.2, port=4420 00:27:08.941 [2024-07-11 13:57:11.293891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe797d0 is same with the state(5) to be set 00:27:08.941 [2024-07-11 13:57:11.294145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.294319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.941 [2024-07-11 13:57:11.294330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e25d0 with addr=10.0.0.2, port=4420 00:27:08.941 [2024-07-11 13:57:11.294337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e25d0 is same with the state(5) to be set 00:27:08.941 [2024-07-11 13:57:11.294344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294637] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.294682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.294688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.941 [2024-07-11 13:57:11.294697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bcd0 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.294706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe797d0 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.294715] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e25d0 (9): Bad file descriptor 00:27:08.941 [2024-07-11 13:57:11.294740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:08.941 [2024-07-11 13:57:11.294792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:08.941 [2024-07-11 13:57:11.294798] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:08.941 [2024-07-11 13:57:11.294824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.294831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.941 [2024-07-11 13:57:11.294836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
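The failure loop above has a simple shape: the target side of each TCP connection is already gone, so posix_sock_create's connect() returns errno 111, nvme_tcp_qpair_connect_sock reports a socket error, spdk_nvme_ctrlr_reconnect_poll_async gives up, and nvme_ctrlr_fail leaves each of the ten controllers in a failed state, hence the repeated "Resetting controller failed." messages. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with a one-liner:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused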
00:27:09.229 13:57:11 -- target/shutdown.sh@135 -- # nvmfpid= 00:27:09.229 13:57:11 -- target/shutdown.sh@138 -- # sleep 1 00:27:10.609 13:57:12 -- target/shutdown.sh@141 -- # kill -9 1708954 00:27:10.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1708954) - No such process 00:27:10.609 13:57:12 -- target/shutdown.sh@141 -- # true 00:27:10.609 13:57:12 -- target/shutdown.sh@143 -- # stoptarget 00:27:10.609 13:57:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:10.609 13:57:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:10.609 13:57:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:10.609 13:57:12 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:10.609 13:57:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:10.609 13:57:12 -- nvmf/common.sh@116 -- # sync 00:27:10.609 13:57:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:10.609 13:57:12 -- nvmf/common.sh@119 -- # set +e 00:27:10.609 13:57:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:10.609 13:57:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:10.609 rmmod nvme_tcp 00:27:10.609 rmmod nvme_fabrics 00:27:10.609 rmmod nvme_keyring 00:27:10.609 13:57:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:10.609 13:57:12 -- nvmf/common.sh@123 -- # set -e 00:27:10.609 13:57:12 -- nvmf/common.sh@124 -- # return 0 00:27:10.609 13:57:12 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:27:10.609 13:57:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:10.609 13:57:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:10.609 13:57:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:10.609 13:57:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.609 13:57:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:10.609 13:57:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.609 13:57:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.609 13:57:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.516 13:57:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:12.516 00:27:12.516 real 0m7.246s 00:27:12.516 user 0m16.832s 00:27:12.516 sys 0m1.204s 00:27:12.516 13:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.516 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 ************************************ 00:27:12.516 END TEST nvmf_shutdown_tc3 00:27:12.516 ************************************ 00:27:12.516 13:57:14 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:27:12.516 00:27:12.516 real 0m29.934s 00:27:12.516 user 1m14.728s 00:27:12.516 sys 0m7.962s 00:27:12.516 13:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.516 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 ************************************ 00:27:12.516 END TEST nvmf_shutdown 00:27:12.516 ************************************ 00:27:12.516 13:57:14 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:12.516 13:57:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:12.516 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 13:57:14 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:12.516 13:57:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:12.516 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 
13:57:14 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:12.516 13:57:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:12.516 13:57:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:12.516 13:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.516 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:12.516 ************************************ 00:27:12.516 START TEST nvmf_multicontroller 00:27:12.516 ************************************ 00:27:12.516 13:57:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:12.516 * Looking for test storage... 00:27:12.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.516 13:57:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.516 13:57:14 -- nvmf/common.sh@7 -- # uname -s 00:27:12.516 13:57:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.516 13:57:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.516 13:57:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.516 13:57:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.516 13:57:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.516 13:57:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.516 13:57:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.516 13:57:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.516 13:57:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.516 13:57:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.516 13:57:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:12.516 13:57:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:12.516 13:57:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.516 13:57:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.516 13:57:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.516 13:57:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.516 13:57:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.517 13:57:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.517 13:57:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.517 13:57:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.517 13:57:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.517 13:57:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.517 13:57:14 -- paths/export.sh@5 -- # export PATH 00:27:12.517 13:57:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.517 13:57:14 -- nvmf/common.sh@46 -- # : 0 00:27:12.517 13:57:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:12.517 13:57:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:12.517 13:57:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:12.517 13:57:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.517 13:57:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.517 13:57:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:12.517 13:57:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:12.776 13:57:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:12.776 13:57:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:12.776 13:57:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:12.776 13:57:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:12.776 13:57:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:12.776 13:57:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:12.776 13:57:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:12.776 13:57:14 -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:12.776 13:57:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:12.776 13:57:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.776 13:57:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:12.776 13:57:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:12.776 13:57:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:12.776 13:57:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.776 13:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.776 13:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:12.776 13:57:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:12.776 13:57:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:12.776 13:57:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:12.776 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:27:18.048 13:57:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.048 13:57:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.048 13:57:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.048 13:57:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.048 13:57:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.048 13:57:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.048 13:57:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.048 13:57:19 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.048 13:57:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.048 13:57:19 -- nvmf/common.sh@295 -- # e810=() 00:27:18.048 13:57:19 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.048 13:57:19 -- nvmf/common.sh@296 -- # x722=() 00:27:18.048 13:57:19 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.048 13:57:19 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.048 13:57:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:18.048 13:57:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.048 13:57:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.048 13:57:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:18.048 13:57:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:18.049 13:57:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.049 13:57:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:18.049 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:18.049 13:57:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.049 13:57:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:18.049 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:18.049 13:57:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
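gather_supported_nvmf_pci_devs builds lists of NIC PCI IDs it knows how to drive (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox parts) and matches them against the host's bus; here it finds both ports of an E810 (8086:159b, kernel driver ice). Outside the harness, the same inventory can be taken directly with lspci, as in this sketch using the Intel IDs from the list above:

  # List any Intel E810 (0x159b, 0x1592) or X722 (0x37d2) NICs on this host.
  for dev in 159b 1592 37d2; do
      lspci -d 8086:${dev}
  done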
00:27:18.049 13:57:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.049 13:57:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.049 13:57:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.049 13:57:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:18.049 Found net devices under 0000:86:00.0: cvl_0_0 00:27:18.049 13:57:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.049 13:57:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.049 13:57:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.049 13:57:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.049 13:57:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:18.049 Found net devices under 0000:86:00.1: cvl_0_1 00:27:18.049 13:57:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.049 13:57:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.049 13:57:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:18.049 13:57:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:18.049 13:57:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.049 13:57:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.049 13:57:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.049 13:57:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:18.049 13:57:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.049 13:57:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.049 13:57:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:18.049 13:57:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.049 13:57:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.049 13:57:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:18.049 13:57:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:18.049 13:57:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.049 13:57:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.049 13:57:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.049 13:57:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.049 13:57:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:18.049 13:57:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.049 13:57:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.049 13:57:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:27:18.049 13:57:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:18.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:27:18.049 00:27:18.049 --- 10.0.0.2 ping statistics --- 00:27:18.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.049 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:27:18.049 13:57:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:27:18.049 00:27:18.049 --- 10.0.0.1 ping statistics --- 00:27:18.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.049 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:27:18.049 13:57:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.049 13:57:20 -- nvmf/common.sh@410 -- # return 0 00:27:18.049 13:57:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.049 13:57:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.049 13:57:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:18.049 13:57:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:18.049 13:57:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.049 13:57:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:18.049 13:57:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:18.049 13:57:20 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:18.049 13:57:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:18.049 13:57:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:18.049 13:57:20 -- common/autotest_common.sh@10 -- # set +x 00:27:18.049 13:57:20 -- nvmf/common.sh@469 -- # nvmfpid=1712921 00:27:18.049 13:57:20 -- nvmf/common.sh@470 -- # waitforlisten 1712921 00:27:18.049 13:57:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:18.049 13:57:20 -- common/autotest_common.sh@819 -- # '[' -z 1712921 ']' 00:27:18.049 13:57:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.049 13:57:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.049 13:57:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.049 13:57:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.049 13:57:20 -- common/autotest_common.sh@10 -- # set +x 00:27:18.049 [2024-07-11 13:57:20.181783] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:18.049 [2024-07-11 13:57:20.181823] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.049 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.049 [2024-07-11 13:57:20.248979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:18.049 [2024-07-11 13:57:20.288492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.049 [2024-07-11 13:57:20.288605] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
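Everything target-side runs inside the cvl_0_0_ns_spdk namespace set up above, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exercise real NIC ports on a single host. Stripped of the harness wrappers, nvmfappstart amounts to the launch below, with waitforlisten polling the RPC socket until the PID answers; paths and flags are the ones shown in this trace.

  # Start nvmf_tgt on cores 1-3 (-m 0xE) inside the target's network namespace.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # waitforlisten then blocks until /var/tmp/spdk.sock accepts RPCs from $nvmfpid.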
00:27:18.049 [2024-07-11 13:57:20.288613] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.049 [2024-07-11 13:57:20.288619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.049 [2024-07-11 13:57:20.288719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.049 [2024-07-11 13:57:20.288813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.049 [2024-07-11 13:57:20.288814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.617 13:57:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:18.617 13:57:21 -- common/autotest_common.sh@852 -- # return 0 00:27:18.617 13:57:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:18.617 13:57:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:18.617 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 13:57:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.617 13:57:21 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.617 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.617 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.617 [2024-07-11 13:57:21.041151] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.617 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.617 13:57:21 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:18.617 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.617 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 Malloc0 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 [2024-07-11 13:57:21.099357] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 [2024-07-11 13:57:21.107306] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
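The rpc_cmd calls above provision the first test subsystem over JSON-RPC: a TCP transport (with the script's -o -u 8192 options), a 64 MiB malloc bdev with 512-byte blocks as the namespace, and cnode1 listening on both test ports. Issued by hand with scripts/rpc.py from the SPDK tree, the equivalent sequence is the following sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421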
00:27:18.876 13:57:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 Malloc1 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.876 13:57:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:18.876 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.876 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.876 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.877 13:57:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:18.877 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.877 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.877 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.877 13:57:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:18.877 13:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:18.877 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:18.877 13:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:18.877 13:57:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=1713172 00:27:18.877 13:57:21 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:18.877 13:57:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:18.877 13:57:21 -- host/multicontroller.sh@47 -- # waitforlisten 1713172 /var/tmp/bdevperf.sock 00:27:18.877 13:57:21 -- common/autotest_common.sh@819 -- # '[' -z 1713172 ']' 00:27:18.877 13:57:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:18.877 13:57:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:18.877 13:57:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:18.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
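bdevperf is started here as a second SPDK application (-z waits for RPC configuration) on its own socket, and the multicontroller scenario is then driven entirely through that socket: the first bdev_nvme_attach_controller creates NVMe0, and the NOT-wrapped retries that follow are expected to fail with JSON-RPC error -114 because the name NVMe0 is already taken with a different hostnqn, subsystem, or multipath mode. A condensed sketch of that flow, with arguments as in the trace:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # First attach succeeds and exposes NVMe0n1 to bdevperf.
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-using the name against cnode2 must fail with -114 ("already exists").
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || echo "expected failure"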
00:27:18.877 13:57:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:18.877 13:57:21 -- common/autotest_common.sh@10 -- # set +x 00:27:19.811 13:57:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.811 13:57:22 -- common/autotest_common.sh@852 -- # return 0 00:27:19.811 13:57:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:19.811 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.811 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:19.811 NVMe0n1 00:27:19.811 13:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.811 13:57:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:19.811 13:57:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:19.811 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.811 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:19.811 13:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:19.811 1 00:27:19.811 13:57:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:19.811 13:57:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:19.811 13:57:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:19.811 13:57:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:19.811 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.811 13:57:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:19.811 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.811 13:57:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:19.811 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.811 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:19.811 request: 00:27:19.811 { 00:27:19.811 "name": "NVMe0", 00:27:19.811 "trtype": "tcp", 00:27:19.811 "traddr": "10.0.0.2", 00:27:19.811 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:19.811 "hostaddr": "10.0.0.2", 00:27:19.811 "hostsvcid": "60000", 00:27:19.811 "adrfam": "ipv4", 00:27:19.811 "trsvcid": "4420", 00:27:19.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:19.811 "method": "bdev_nvme_attach_controller", 00:27:19.811 "req_id": 1 00:27:19.811 } 00:27:19.811 Got JSON-RPC error response 00:27:19.811 response: 00:27:19.811 { 00:27:19.811 "code": -114, 00:27:19.811 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:19.811 } 00:27:19.811 13:57:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:19.811 13:57:22 -- common/autotest_common.sh@643 -- # es=1 00:27:19.811 13:57:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:19.811 13:57:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:19.811 13:57:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:19.811 13:57:22 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:19.811 13:57:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:19.811 13:57:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:19.811 13:57:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:19.811 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.811 13:57:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:19.812 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:19.812 13:57:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:19.812 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:19.812 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.070 request: 00:27:20.070 { 00:27:20.070 "name": "NVMe0", 00:27:20.070 "trtype": "tcp", 00:27:20.070 "traddr": "10.0.0.2", 00:27:20.070 "hostaddr": "10.0.0.2", 00:27:20.070 "hostsvcid": "60000", 00:27:20.070 "adrfam": "ipv4", 00:27:20.070 "trsvcid": "4420", 00:27:20.070 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:20.070 "method": "bdev_nvme_attach_controller", 00:27:20.070 "req_id": 1 00:27:20.070 } 00:27:20.070 Got JSON-RPC error response 00:27:20.070 response: 00:27:20.070 { 00:27:20.070 "code": -114, 00:27:20.070 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:20.070 } 00:27:20.070 13:57:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@643 -- # es=1 00:27:20.070 13:57:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.070 13:57:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.070 13:57:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.070 13:57:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.070 13:57:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.070 request: 00:27:20.070 { 00:27:20.070 "name": "NVMe0", 00:27:20.070 "trtype": "tcp", 00:27:20.070 "traddr": "10.0.0.2", 00:27:20.070 "hostaddr": 
"10.0.0.2", 00:27:20.070 "hostsvcid": "60000", 00:27:20.070 "adrfam": "ipv4", 00:27:20.070 "trsvcid": "4420", 00:27:20.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.070 "multipath": "disable", 00:27:20.070 "method": "bdev_nvme_attach_controller", 00:27:20.070 "req_id": 1 00:27:20.070 } 00:27:20.070 Got JSON-RPC error response 00:27:20.070 response: 00:27:20.070 { 00:27:20.070 "code": -114, 00:27:20.070 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:20.070 } 00:27:20.070 13:57:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@643 -- # es=1 00:27:20.070 13:57:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.070 13:57:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.070 13:57:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.070 13:57:22 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.070 13:57:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.070 13:57:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:20.070 13:57:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.070 13:57:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:20.070 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.070 request: 00:27:20.070 { 00:27:20.070 "name": "NVMe0", 00:27:20.070 "trtype": "tcp", 00:27:20.070 "traddr": "10.0.0.2", 00:27:20.070 "hostaddr": "10.0.0.2", 00:27:20.070 "hostsvcid": "60000", 00:27:20.070 "adrfam": "ipv4", 00:27:20.070 "trsvcid": "4420", 00:27:20.070 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.070 "multipath": "failover", 00:27:20.070 "method": "bdev_nvme_attach_controller", 00:27:20.070 "req_id": 1 00:27:20.070 } 00:27:20.070 Got JSON-RPC error response 00:27:20.070 response: 00:27:20.070 { 00:27:20.070 "code": -114, 00:27:20.070 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:20.070 } 00:27:20.070 13:57:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@643 -- # es=1 00:27:20.070 13:57:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.070 13:57:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:20.070 13:57:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.070 13:57:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:20.070 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.070 00:27:20.070 13:57:22 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:20.070 13:57:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:20.070 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.070 13:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.070 13:57:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:20.070 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.070 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.329 00:27:20.329 13:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.329 13:57:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.329 13:57:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:20.329 13:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:20.329 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:27:20.329 13:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:20.329 13:57:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:20.329 13:57:22 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:21.711 0 00:27:21.711 13:57:23 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:21.711 13:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.711 13:57:23 -- common/autotest_common.sh@10 -- # set +x 00:27:21.711 13:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.711 13:57:23 -- host/multicontroller.sh@100 -- # killprocess 1713172 00:27:21.711 13:57:23 -- common/autotest_common.sh@926 -- # '[' -z 1713172 ']' 00:27:21.711 13:57:23 -- common/autotest_common.sh@930 -- # kill -0 1713172 00:27:21.711 13:57:23 -- common/autotest_common.sh@931 -- # uname 00:27:21.711 13:57:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:21.711 13:57:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1713172 00:27:21.711 13:57:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:21.711 13:57:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:21.711 13:57:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1713172' 00:27:21.711 killing process with pid 1713172 00:27:21.711 13:57:23 -- common/autotest_common.sh@945 -- # kill 1713172 00:27:21.711 13:57:23 -- common/autotest_common.sh@950 -- # wait 1713172 00:27:21.711 13:57:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.711 13:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.711 13:57:24 -- common/autotest_common.sh@10 -- # set +x 00:27:21.711 13:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.711 13:57:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:21.711 13:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:21.711 13:57:24 -- common/autotest_common.sh@10 -- # set +x 00:27:21.711 13:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:21.711 13:57:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
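The attach/failover sequence traced above reduces to a handful of JSON-RPC calls against the bdevperf socket. A minimal sketch (rpc_cmd is assumed here to be a thin wrapper over scripts/rpc.py in the SPDK tree; the socket path, addresses and NQNs are the ones used in this run):

  # first path; registers bdev NVMe0n1 (mirrors host/multicontroller.sh@50)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # re-attaching under the same name with a different hostnqn or subnqn, or with
  # -x disable/failover, returns -114 as seen above; only a genuinely new network
  # path to the same subsystem is accepted as a second path:
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # I/O over the registered bdevs is then driven through bdevperf's own RPC:
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests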
00:27:21.711 13:57:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:21.711 13:57:24 -- common/autotest_common.sh@1597 -- # read -r file 00:27:21.711 13:57:24 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:21.711 13:57:24 -- common/autotest_common.sh@1596 -- # sort -u 00:27:21.711 13:57:24 -- common/autotest_common.sh@1598 -- # cat 00:27:21.711 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:21.711 [2024-07-11 13:57:21.206464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:21.711 [2024-07-11 13:57:21.206513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713172 ] 00:27:21.711 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.711 [2024-07-11 13:57:21.259937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.711 [2024-07-11 13:57:21.299061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.711 [2024-07-11 13:57:22.735614] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 53f66563-ce3d-4f57-b3c7-7a75ff5c5cd7 already exists 00:27:21.711 [2024-07-11 13:57:22.735643] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:53f66563-ce3d-4f57-b3c7-7a75ff5c5cd7 alias for bdev NVMe1n1 00:27:21.711 [2024-07-11 13:57:22.735652] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:21.711 Running I/O for 1 seconds... 00:27:21.711 00:27:21.711 Latency(us) 00:27:21.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.711 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:21.711 NVMe0n1 : 1.00 25009.40 97.69 0.00 0.00 5107.18 1460.31 8719.14 00:27:21.711 =================================================================================================================== 00:27:21.711 Total : 25009.40 97.69 0.00 0.00 5107.18 1460.31 8719.14 00:27:21.711 Received shutdown signal, test time was about 1.000000 seconds 00:27:21.711 00:27:21.711 Latency(us) 00:27:21.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:21.711 =================================================================================================================== 00:27:21.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:21.711 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:21.711 13:57:24 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:21.711 13:57:24 -- common/autotest_common.sh@1597 -- # read -r file 00:27:21.711 13:57:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:21.711 13:57:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:21.711 13:57:24 -- nvmf/common.sh@116 -- # sync 00:27:21.711 13:57:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:21.711 13:57:24 -- nvmf/common.sh@119 -- # set +e 00:27:21.711 13:57:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:21.711 13:57:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:21.711 rmmod nvme_tcp 00:27:21.711 rmmod nvme_fabrics 00:27:21.970 rmmod nvme_keyring 00:27:21.970 13:57:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:21.970 13:57:24 -- nvmf/common.sh@123 -- # set -e 
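For scale, the result table above is self-consistent: 25009.40 write IOPS at the 4096-byte I/O size shown in the job line works out to 25009.40 x 4096 B ≈ 102.4 MB/s, i.e. the reported 97.69 MiB/s, at an average latency of about 5107 us.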
00:27:21.970 13:57:24 -- nvmf/common.sh@124 -- # return 0 00:27:21.970 13:57:24 -- nvmf/common.sh@477 -- # '[' -n 1712921 ']' 00:27:21.970 13:57:24 -- nvmf/common.sh@478 -- # killprocess 1712921 00:27:21.970 13:57:24 -- common/autotest_common.sh@926 -- # '[' -z 1712921 ']' 00:27:21.970 13:57:24 -- common/autotest_common.sh@930 -- # kill -0 1712921 00:27:21.970 13:57:24 -- common/autotest_common.sh@931 -- # uname 00:27:21.970 13:57:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:21.970 13:57:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1712921 00:27:21.970 13:57:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:21.970 13:57:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:21.970 13:57:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1712921' 00:27:21.970 killing process with pid 1712921 00:27:21.970 13:57:24 -- common/autotest_common.sh@945 -- # kill 1712921 00:27:21.970 13:57:24 -- common/autotest_common.sh@950 -- # wait 1712921 00:27:22.229 13:57:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:22.229 13:57:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:22.229 13:57:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:22.229 13:57:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.229 13:57:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:22.229 13:57:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.229 13:57:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.229 13:57:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.134 13:57:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:24.134 00:27:24.134 real 0m11.666s 00:27:24.134 user 0m17.174s 00:27:24.134 sys 0m4.569s 00:27:24.134 13:57:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.134 13:57:26 -- common/autotest_common.sh@10 -- # set +x 00:27:24.134 ************************************ 00:27:24.134 END TEST nvmf_multicontroller 00:27:24.134 ************************************ 00:27:24.134 13:57:26 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:24.134 13:57:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:24.134 13:57:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:24.134 13:57:26 -- common/autotest_common.sh@10 -- # set +x 00:27:24.134 ************************************ 00:27:24.134 START TEST nvmf_aer 00:27:24.134 ************************************ 00:27:24.134 13:57:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:24.394 * Looking for test storage... 
00:27:24.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.394 13:57:26 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.394 13:57:26 -- nvmf/common.sh@7 -- # uname -s 00:27:24.394 13:57:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.394 13:57:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.394 13:57:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.394 13:57:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.394 13:57:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.394 13:57:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.394 13:57:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.394 13:57:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.394 13:57:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.394 13:57:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.394 13:57:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.394 13:57:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.394 13:57:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.394 13:57:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.394 13:57:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.394 13:57:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.394 13:57:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.394 13:57:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.394 13:57:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.394 13:57:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.394 13:57:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.394 13:57:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.394 13:57:26 -- paths/export.sh@5 -- # export PATH 00:27:24.394 13:57:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.394 13:57:26 -- nvmf/common.sh@46 -- # : 0 00:27:24.394 13:57:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:24.394 13:57:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:24.394 13:57:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:24.394 13:57:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.395 13:57:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.395 13:57:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:24.395 13:57:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:24.395 13:57:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:24.395 13:57:26 -- host/aer.sh@11 -- # nvmftestinit 00:27:24.395 13:57:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:24.395 13:57:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.395 13:57:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:24.395 13:57:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:24.395 13:57:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:24.395 13:57:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.395 13:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.395 13:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.395 13:57:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:24.395 13:57:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:24.395 13:57:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:24.395 13:57:26 -- common/autotest_common.sh@10 -- # set +x 00:27:29.672 13:57:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:29.672 13:57:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:29.672 13:57:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:29.672 13:57:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:29.672 13:57:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:29.672 13:57:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:29.672 13:57:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:29.672 13:57:31 -- nvmf/common.sh@294 -- # net_devs=() 00:27:29.672 13:57:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:29.672 13:57:31 -- nvmf/common.sh@295 -- # e810=() 00:27:29.672 13:57:31 -- nvmf/common.sh@295 -- # local -ga e810 00:27:29.672 13:57:31 -- nvmf/common.sh@296 -- # x722=() 00:27:29.672 
13:57:31 -- nvmf/common.sh@296 -- # local -ga x722 00:27:29.672 13:57:31 -- nvmf/common.sh@297 -- # mlx=() 00:27:29.672 13:57:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:29.672 13:57:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.673 13:57:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:29.673 13:57:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:29.673 13:57:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.673 13:57:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:29.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:29.673 13:57:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:29.673 13:57:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:29.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:29.673 13:57:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.673 13:57:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.673 13:57:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.673 13:57:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:29.673 Found net devices under 0000:86:00.0: cvl_0_0 00:27:29.673 13:57:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.673 13:57:31 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:29.673 13:57:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.673 13:57:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.673 13:57:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:29.673 Found net devices under 0000:86:00.1: cvl_0_1 00:27:29.673 13:57:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.673 13:57:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:29.673 13:57:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:29.673 13:57:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.673 13:57:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.673 13:57:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.673 13:57:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:29.673 13:57:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.673 13:57:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.673 13:57:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:29.673 13:57:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.673 13:57:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.673 13:57:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:29.673 13:57:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:29.673 13:57:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.673 13:57:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.673 13:57:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.673 13:57:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.673 13:57:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:29.673 13:57:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.673 13:57:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.673 13:57:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.673 13:57:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:29.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:27:29.673 00:27:29.673 --- 10.0.0.2 ping statistics --- 00:27:29.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.673 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:27:29.673 13:57:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:27:29.673 00:27:29.673 --- 10.0.0.1 ping statistics --- 00:27:29.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.673 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:27:29.673 13:57:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.673 13:57:31 -- nvmf/common.sh@410 -- # return 0 00:27:29.673 13:57:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:29.673 13:57:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.673 13:57:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:29.673 13:57:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.673 13:57:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:29.673 13:57:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:29.673 13:57:31 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:29.673 13:57:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:29.673 13:57:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:29.673 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.673 13:57:31 -- nvmf/common.sh@469 -- # nvmfpid=1717041 00:27:29.673 13:57:31 -- nvmf/common.sh@470 -- # waitforlisten 1717041 00:27:29.673 13:57:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.673 13:57:31 -- common/autotest_common.sh@819 -- # '[' -z 1717041 ']' 00:27:29.673 13:57:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.673 13:57:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.673 13:57:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.673 13:57:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.673 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:27:29.673 [2024-07-11 13:57:31.793650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:29.673 [2024-07-11 13:57:31.793693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.673 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.673 [2024-07-11 13:57:31.850528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.673 [2024-07-11 13:57:31.890981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.673 [2024-07-11 13:57:31.891108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.673 [2024-07-11 13:57:31.891116] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.673 [2024-07-11 13:57:31.891124] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
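Condensed, the target bring-up that the messages above trace (a sketch; the namespace, binary path and core mask are the ones from this run, and waitforlisten is assumed to be the test framework's poll-until-the-RPC-socket-answers helper):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until /var/tmp/spdk.sock accepts RPCs, then configure the target
  waitforlisten $nvmfpid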
00:27:29.673 [2024-07-11 13:57:31.891166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.673 [2024-07-11 13:57:31.891187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.673 [2024-07-11 13:57:31.891275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.673 [2024-07-11 13:57:31.891276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.243 13:57:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.243 13:57:32 -- common/autotest_common.sh@852 -- # return 0 00:27:30.243 13:57:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:30.243 13:57:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 13:57:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.243 13:57:32 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 [2024-07-11 13:57:32.633560] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.243 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.243 13:57:32 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 Malloc0 00:27:30.243 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.243 13:57:32 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.243 13:57:32 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.243 13:57:32 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 [2024-07-11 13:57:32.685387] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.243 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.243 13:57:32 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:30.243 13:57:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.243 13:57:32 -- common/autotest_common.sh@10 -- # set +x 00:27:30.243 [2024-07-11 13:57:32.693189] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:30.243 [ 00:27:30.243 { 00:27:30.243 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.520 "subtype": "Discovery", 00:27:30.520 "listen_addresses": [], 00:27:30.520 "allow_any_host": true, 00:27:30.520 "hosts": [] 00:27:30.520 }, 00:27:30.520 { 00:27:30.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:30.520 "subtype": "NVMe", 00:27:30.520 "listen_addresses": [ 00:27:30.520 { 00:27:30.520 "transport": "TCP", 00:27:30.520 "trtype": "TCP", 00:27:30.520 "adrfam": "IPv4", 00:27:30.520 "traddr": "10.0.0.2", 00:27:30.520 "trsvcid": "4420" 00:27:30.520 } 00:27:30.520 ], 00:27:30.520 "allow_any_host": true, 00:27:30.520 "hosts": [], 00:27:30.520 "serial_number": "SPDK00000000000001", 00:27:30.520 "model_number": "SPDK bdev Controller", 00:27:30.520 "max_namespaces": 2, 00:27:30.520 "min_cntlid": 1, 00:27:30.520 "max_cntlid": 65519, 00:27:30.520 "namespaces": [ 00:27:30.520 { 00:27:30.520 "nsid": 1, 00:27:30.520 "bdev_name": "Malloc0", 00:27:30.520 "name": "Malloc0", 00:27:30.520 "nguid": "269BF92BCF9444D1A1029FBE28ABA843", 00:27:30.520 "uuid": "269bf92b-cf94-44d1-a102-9fbe28aba843" 00:27:30.520 } 00:27:30.520 ] 00:27:30.520 } 00:27:30.520 ] 00:27:30.520 13:57:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.520 13:57:32 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:30.520 13:57:32 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:30.520 13:57:32 -- host/aer.sh@33 -- # aerpid=1717226 00:27:30.520 13:57:32 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:30.520 13:57:32 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:30.520 13:57:32 -- common/autotest_common.sh@1244 -- # local i=0 00:27:30.520 13:57:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1247 -- # i=1 00:27:30.520 13:57:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:30.520 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.520 13:57:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1247 -- # i=2 00:27:30.520 13:57:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:30.520 13:57:32 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:27:30.520 13:57:32 -- common/autotest_common.sh@1247 -- # i=3 00:27:30.520 13:57:32 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:30.779 13:57:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.779 13:57:33 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:30.779 13:57:33 -- common/autotest_common.sh@1255 -- # return 0 00:27:30.779 13:57:33 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 Malloc1 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 [ 00:27:30.779 { 00:27:30.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.779 "subtype": "Discovery", 00:27:30.779 "listen_addresses": [], 00:27:30.779 "allow_any_host": true, 00:27:30.779 "hosts": [] 00:27:30.779 }, 00:27:30.779 { 00:27:30.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.779 "subtype": "NVMe", 00:27:30.779 "listen_addresses": [ 00:27:30.779 { 00:27:30.779 "transport": "TCP", 00:27:30.779 "trtype": "TCP", 00:27:30.779 "adrfam": "IPv4", 00:27:30.779 "traddr": "10.0.0.2", 00:27:30.779 "trsvcid": "4420" 00:27:30.779 } 00:27:30.779 ], 00:27:30.779 "allow_any_host": true, 00:27:30.779 "hosts": [], 00:27:30.779 "serial_number": "SPDK00000000000001", 00:27:30.779 "model_number": "SPDK bdev Controller", 00:27:30.779 "max_namespaces": 2, 00:27:30.779 "min_cntlid": 1, 00:27:30.779 "max_cntlid": 65519, 00:27:30.779 "namespaces": [ 00:27:30.779 { 00:27:30.779 "nsid": 1, 00:27:30.779 "bdev_name": "Malloc0", 00:27:30.779 "name": "Malloc0", 00:27:30.779 "nguid": "269BF92BCF9444D1A1029FBE28ABA843", 00:27:30.779 "uuid": "269bf92b-cf94-44d1-a102-9fbe28aba843" 00:27:30.779 }, 00:27:30.779 { 00:27:30.779 Asynchronous Event Request test 00:27:30.779 Attaching to 10.0.0.2 00:27:30.779 Attached to 10.0.0.2 00:27:30.779 Registering asynchronous event callbacks... 00:27:30.779 Starting namespace attribute notice tests for all controllers... 00:27:30.779 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:30.779 aer_cb - Changed Namespace 00:27:30.779 Cleaning up... 
00:27:30.779 "nsid": 2, 00:27:30.779 "bdev_name": "Malloc1", 00:27:30.779 "name": "Malloc1", 00:27:30.779 "nguid": "23C02B7C9F564B3EA6C3D89156098744", 00:27:30.779 "uuid": "23c02b7c-9f56-4b3e-a6c3-d89156098744" 00:27:30.779 } 00:27:30.779 ] 00:27:30.779 } 00:27:30.779 ] 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@43 -- # wait 1717226 00:27:30.779 13:57:33 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.779 13:57:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.779 13:57:33 -- common/autotest_common.sh@10 -- # set +x 00:27:30.779 13:57:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.779 13:57:33 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:30.779 13:57:33 -- host/aer.sh@51 -- # nvmftestfini 00:27:30.779 13:57:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:30.779 13:57:33 -- nvmf/common.sh@116 -- # sync 00:27:30.779 13:57:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:30.779 13:57:33 -- nvmf/common.sh@119 -- # set +e 00:27:30.779 13:57:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:30.779 13:57:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:30.779 rmmod nvme_tcp 00:27:30.779 rmmod nvme_fabrics 00:27:30.779 rmmod nvme_keyring 00:27:30.779 13:57:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:30.779 13:57:33 -- nvmf/common.sh@123 -- # set -e 00:27:30.779 13:57:33 -- nvmf/common.sh@124 -- # return 0 00:27:30.779 13:57:33 -- nvmf/common.sh@477 -- # '[' -n 1717041 ']' 00:27:30.779 13:57:33 -- nvmf/common.sh@478 -- # killprocess 1717041 00:27:30.779 13:57:33 -- common/autotest_common.sh@926 -- # '[' -z 1717041 ']' 00:27:30.779 13:57:33 -- common/autotest_common.sh@930 -- # kill -0 1717041 00:27:30.779 13:57:33 -- common/autotest_common.sh@931 -- # uname 00:27:30.779 13:57:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.779 13:57:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1717041 00:27:31.073 13:57:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:31.073 13:57:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:31.073 13:57:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1717041' 00:27:31.073 killing process with pid 1717041 00:27:31.073 13:57:33 -- common/autotest_common.sh@945 -- # kill 1717041 00:27:31.073 [2024-07-11 13:57:33.253316] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:31.073 13:57:33 -- common/autotest_common.sh@950 -- # wait 1717041 00:27:31.073 13:57:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:31.073 13:57:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:31.073 13:57:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:31.073 13:57:33 -- nvmf/common.sh@273 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.073 13:57:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:31.073 13:57:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.073 13:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.073 13:57:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.608 13:57:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:33.608 00:27:33.608 real 0m8.915s 00:27:33.608 user 0m7.610s 00:27:33.608 sys 0m4.193s 00:27:33.608 13:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.608 13:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 ************************************ 00:27:33.608 END TEST nvmf_aer 00:27:33.608 ************************************ 00:27:33.608 13:57:35 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:33.608 13:57:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:33.608 13:57:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.608 13:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:33.608 ************************************ 00:27:33.608 START TEST nvmf_async_init 00:27:33.608 ************************************ 00:27:33.608 13:57:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:33.608 * Looking for test storage... 00:27:33.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.608 13:57:35 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.608 13:57:35 -- nvmf/common.sh@7 -- # uname -s 00:27:33.608 13:57:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.608 13:57:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.608 13:57:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.608 13:57:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.608 13:57:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.608 13:57:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.608 13:57:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.608 13:57:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.608 13:57:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.608 13:57:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.608 13:57:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.608 13:57:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.608 13:57:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.608 13:57:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.608 13:57:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.608 13:57:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.608 13:57:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.608 13:57:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.608 13:57:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.608 13:57:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.608 13:57:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.608 13:57:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.608 13:57:35 -- paths/export.sh@5 -- # export PATH 00:27:33.608 13:57:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.608 13:57:35 -- nvmf/common.sh@46 -- # : 0 00:27:33.608 13:57:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:33.608 13:57:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:33.608 13:57:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:33.608 13:57:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.608 13:57:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.608 13:57:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:33.608 13:57:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:33.608 13:57:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:33.608 13:57:35 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:33.608 13:57:35 -- host/async_init.sh@14 -- # null_block_size=512 00:27:33.608 13:57:35 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:33.608 13:57:35 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:33.608 13:57:35 -- host/async_init.sh@20 -- # uuidgen 00:27:33.608 13:57:35 -- host/async_init.sh@20 -- # tr -d - 00:27:33.608 13:57:35 -- host/async_init.sh@20 -- # nguid=fe38022e025844a0bf35a8d81800c858 00:27:33.608 13:57:35 -- host/async_init.sh@22 -- # nvmftestinit 00:27:33.608 13:57:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
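The nguid traced above is nothing more than a dash-stripped random UUID; the derivation, standalone (value from this run):

  nguid=$(uuidgen | tr -d -)   # e.g. fe38022e025844a0bf35a8d81800c858
  # handed to the target later when the null bdev is exposed as a namespace:
  # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid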
00:27:33.608 13:57:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.608 13:57:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:33.608 13:57:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:33.608 13:57:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:33.608 13:57:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.608 13:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.608 13:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.608 13:57:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:33.608 13:57:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:33.608 13:57:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:33.608 13:57:35 -- common/autotest_common.sh@10 -- # set +x 00:27:38.883 13:57:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:38.883 13:57:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:38.883 13:57:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:38.883 13:57:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:38.883 13:57:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:38.883 13:57:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:38.883 13:57:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:38.883 13:57:40 -- nvmf/common.sh@294 -- # net_devs=() 00:27:38.883 13:57:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:38.883 13:57:40 -- nvmf/common.sh@295 -- # e810=() 00:27:38.883 13:57:40 -- nvmf/common.sh@295 -- # local -ga e810 00:27:38.883 13:57:40 -- nvmf/common.sh@296 -- # x722=() 00:27:38.883 13:57:40 -- nvmf/common.sh@296 -- # local -ga x722 00:27:38.883 13:57:40 -- nvmf/common.sh@297 -- # mlx=() 00:27:38.883 13:57:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:38.883 13:57:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.883 13:57:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.884 13:57:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.884 13:57:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.884 13:57:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.884 13:57:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.884 13:57:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.884 13:57:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.884 13:57:40 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.884 13:57:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.884 13:57:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.884 13:57:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.884 13:57:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.884 13:57:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.884 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.884 13:57:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.884 13:57:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.884 13:57:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.884 13:57:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.884 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.884 13:57:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:38.884 13:57:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:38.884 13:57:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.884 13:57:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.884 13:57:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:38.884 13:57:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.884 13:57:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.884 13:57:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:38.884 13:57:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.884 13:57:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.884 13:57:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:38.884 13:57:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:38.884 13:57:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.884 13:57:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
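The "Found net devices under ..." lines above come from a plain sysfs glob, visible in the traced pci_net_devs assignment; the same lookup done by hand (PCI addresses from this rig):

  pci=0000:86:00.0
  ls /sys/bus/pci/devices/$pci/net   # -> cvl_0_0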
00:27:38.884 13:57:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.884 13:57:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.884 13:57:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:38.884 13:57:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.884 13:57:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.884 13:57:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.884 13:57:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:38.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:38.884 00:27:38.884 --- 10.0.0.2 ping statistics --- 00:27:38.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.884 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:38.884 13:57:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:27:38.884 00:27:38.884 --- 10.0.0.1 ping statistics --- 00:27:38.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.884 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:38.884 13:57:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.884 13:57:40 -- nvmf/common.sh@410 -- # return 0 00:27:38.884 13:57:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:38.884 13:57:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.884 13:57:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:38.884 13:57:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.884 13:57:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:38.884 13:57:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:38.884 13:57:40 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:38.884 13:57:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:38.884 13:57:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:38.884 13:57:40 -- common/autotest_common.sh@10 -- # set +x 00:27:38.884 13:57:40 -- nvmf/common.sh@469 -- # nvmfpid=1720769 00:27:38.884 13:57:40 -- nvmf/common.sh@470 -- # waitforlisten 1720769 00:27:38.884 13:57:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:38.884 13:57:40 -- common/autotest_common.sh@819 -- # '[' -z 1720769 ']' 00:27:38.884 13:57:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.884 13:57:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.884 13:57:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.884 13:57:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.884 13:57:40 -- common/autotest_common.sh@10 -- # set +x 00:27:38.884 [2024-07-11 13:57:41.028389] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
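The nvmf_tgt instance starting here is driven by host/async_init.sh: it backs subsystem nqn.2016-06.io.spdk:cnode0 with a null bdev whose namespace gets a fixed NGUID (which is why the attached bdev later reports uuid fe38022e-0258-44a0-bf35-a8d81800c858), attaches a controller, resets it, then re-attaches over a TLS-secured listener. That last part is the least obvious; condensed from the rpc_cmd traces further down, with the PSK string and mktemp path verbatim from this run:

    key_path=$(mktemp)                                  # /tmp/tmp.ntX9pn3KbM in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                              # keep the PSK private
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel     # TLS listener on a second port
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

rpc_cmd here is autotest's thin wrapper around scripts/rpc.py; note that both the listener and the attach side log below that TLS support is considered experimental in this SPDK build.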
00:27:38.884 [2024-07-11 13:57:41.028434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.884 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.884 [2024-07-11 13:57:41.087679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.884 [2024-07-11 13:57:41.125025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:38.884 [2024-07-11 13:57:41.125155] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.884 [2024-07-11 13:57:41.125172] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.884 [2024-07-11 13:57:41.125178] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.884 [2024-07-11 13:57:41.125201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.451 13:57:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.451 13:57:41 -- common/autotest_common.sh@852 -- # return 0 00:27:39.451 13:57:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:39.451 13:57:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 13:57:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.451 13:57:41 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:39.451 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 [2024-07-11 13:57:41.861964] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.451 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.451 13:57:41 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:39.451 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 null0 00:27:39.451 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.451 13:57:41 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:39.451 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.451 13:57:41 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:39.451 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.451 13:57:41 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fe38022e025844a0bf35a8d81800c858 00:27:39.451 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.451 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.451 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.452 13:57:41 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.452 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.452 13:57:41 -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.452 [2024-07-11 13:57:41.902202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.452 13:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.452 13:57:41 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:39.452 13:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.452 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:27:39.711 nvme0n1 00:27:39.711 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.711 13:57:42 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:39.711 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.711 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.711 [ 00:27:39.711 { 00:27:39.711 "name": "nvme0n1", 00:27:39.711 "aliases": [ 00:27:39.711 "fe38022e-0258-44a0-bf35-a8d81800c858" 00:27:39.711 ], 00:27:39.711 "product_name": "NVMe disk", 00:27:39.711 "block_size": 512, 00:27:39.711 "num_blocks": 2097152, 00:27:39.711 "uuid": "fe38022e-0258-44a0-bf35-a8d81800c858", 00:27:39.711 "assigned_rate_limits": { 00:27:39.711 "rw_ios_per_sec": 0, 00:27:39.711 "rw_mbytes_per_sec": 0, 00:27:39.711 "r_mbytes_per_sec": 0, 00:27:39.711 "w_mbytes_per_sec": 0 00:27:39.711 }, 00:27:39.711 "claimed": false, 00:27:39.711 "zoned": false, 00:27:39.711 "supported_io_types": { 00:27:39.711 "read": true, 00:27:39.711 "write": true, 00:27:39.711 "unmap": false, 00:27:39.711 "write_zeroes": true, 00:27:39.711 "flush": true, 00:27:39.711 "reset": true, 00:27:39.711 "compare": true, 00:27:39.711 "compare_and_write": true, 00:27:39.711 "abort": true, 00:27:39.711 "nvme_admin": true, 00:27:39.711 "nvme_io": true 00:27:39.711 }, 00:27:39.711 "driver_specific": { 00:27:39.711 "nvme": [ 00:27:39.711 { 00:27:39.711 "trid": { 00:27:39.711 "trtype": "TCP", 00:27:39.711 "adrfam": "IPv4", 00:27:39.711 "traddr": "10.0.0.2", 00:27:39.711 "trsvcid": "4420", 00:27:39.711 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:39.711 }, 00:27:39.711 "ctrlr_data": { 00:27:39.711 "cntlid": 1, 00:27:39.711 "vendor_id": "0x8086", 00:27:39.711 "model_number": "SPDK bdev Controller", 00:27:39.711 "serial_number": "00000000000000000000", 00:27:39.711 "firmware_revision": "24.01.1", 00:27:39.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.711 "oacs": { 00:27:39.711 "security": 0, 00:27:39.711 "format": 0, 00:27:39.711 "firmware": 0, 00:27:39.711 "ns_manage": 0 00:27:39.711 }, 00:27:39.711 "multi_ctrlr": true, 00:27:39.711 "ana_reporting": false 00:27:39.711 }, 00:27:39.711 "vs": { 00:27:39.711 "nvme_version": "1.3" 00:27:39.711 }, 00:27:39.711 "ns_data": { 00:27:39.711 "id": 1, 00:27:39.711 "can_share": true 00:27:39.711 } 00:27:39.711 } 00:27:39.711 ], 00:27:39.711 "mp_policy": "active_passive" 00:27:39.711 } 00:27:39.711 } 00:27:39.711 ] 00:27:39.711 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.711 13:57:42 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:39.711 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.711 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.711 [2024-07-11 13:57:42.150726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:39.711 [2024-07-11 13:57:42.150779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266bf00 (9): Bad file 
descriptor 00:27:39.970 [2024-07-11 13:57:42.282234] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:39.970 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.970 13:57:42 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:39.970 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.970 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.970 [ 00:27:39.970 { 00:27:39.970 "name": "nvme0n1", 00:27:39.970 "aliases": [ 00:27:39.970 "fe38022e-0258-44a0-bf35-a8d81800c858" 00:27:39.970 ], 00:27:39.970 "product_name": "NVMe disk", 00:27:39.970 "block_size": 512, 00:27:39.970 "num_blocks": 2097152, 00:27:39.970 "uuid": "fe38022e-0258-44a0-bf35-a8d81800c858", 00:27:39.970 "assigned_rate_limits": { 00:27:39.970 "rw_ios_per_sec": 0, 00:27:39.970 "rw_mbytes_per_sec": 0, 00:27:39.970 "r_mbytes_per_sec": 0, 00:27:39.970 "w_mbytes_per_sec": 0 00:27:39.970 }, 00:27:39.970 "claimed": false, 00:27:39.970 "zoned": false, 00:27:39.970 "supported_io_types": { 00:27:39.970 "read": true, 00:27:39.970 "write": true, 00:27:39.970 "unmap": false, 00:27:39.970 "write_zeroes": true, 00:27:39.970 "flush": true, 00:27:39.970 "reset": true, 00:27:39.970 "compare": true, 00:27:39.970 "compare_and_write": true, 00:27:39.970 "abort": true, 00:27:39.970 "nvme_admin": true, 00:27:39.970 "nvme_io": true 00:27:39.970 }, 00:27:39.970 "driver_specific": { 00:27:39.970 "nvme": [ 00:27:39.970 { 00:27:39.970 "trid": { 00:27:39.970 "trtype": "TCP", 00:27:39.970 "adrfam": "IPv4", 00:27:39.970 "traddr": "10.0.0.2", 00:27:39.970 "trsvcid": "4420", 00:27:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:39.970 }, 00:27:39.970 "ctrlr_data": { 00:27:39.970 "cntlid": 2, 00:27:39.970 "vendor_id": "0x8086", 00:27:39.970 "model_number": "SPDK bdev Controller", 00:27:39.970 "serial_number": "00000000000000000000", 00:27:39.970 "firmware_revision": "24.01.1", 00:27:39.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.970 "oacs": { 00:27:39.970 "security": 0, 00:27:39.970 "format": 0, 00:27:39.970 "firmware": 0, 00:27:39.970 "ns_manage": 0 00:27:39.970 }, 00:27:39.970 "multi_ctrlr": true, 00:27:39.970 "ana_reporting": false 00:27:39.970 }, 00:27:39.970 "vs": { 00:27:39.970 "nvme_version": "1.3" 00:27:39.970 }, 00:27:39.970 "ns_data": { 00:27:39.970 "id": 1, 00:27:39.970 "can_share": true 00:27:39.970 } 00:27:39.970 } 00:27:39.970 ], 00:27:39.970 "mp_policy": "active_passive" 00:27:39.970 } 00:27:39.970 } 00:27:39.970 ] 00:27:39.970 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.970 13:57:42 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.970 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.970 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.970 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.970 13:57:42 -- host/async_init.sh@53 -- # mktemp 00:27:39.970 13:57:42 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ntX9pn3KbM 00:27:39.970 13:57:42 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:39.970 13:57:42 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ntX9pn3KbM 00:27:39.970 13:57:42 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:39.970 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.970 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.970 13:57:42 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.970 13:57:42 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:39.971 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.971 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 [2024-07-11 13:57:42.331286] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:39.971 [2024-07-11 13:57:42.331393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:39.971 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.971 13:57:42 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ntX9pn3KbM 00:27:39.971 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.971 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.971 13:57:42 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ntX9pn3KbM 00:27:39.971 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.971 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 [2024-07-11 13:57:42.347324] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:39.971 nvme0n1 00:27:39.971 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.971 13:57:42 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:39.971 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.971 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:39.971 [ 00:27:39.971 { 00:27:39.971 "name": "nvme0n1", 00:27:39.971 "aliases": [ 00:27:39.971 "fe38022e-0258-44a0-bf35-a8d81800c858" 00:27:39.971 ], 00:27:39.971 "product_name": "NVMe disk", 00:27:39.971 "block_size": 512, 00:27:39.971 "num_blocks": 2097152, 00:27:39.971 "uuid": "fe38022e-0258-44a0-bf35-a8d81800c858", 00:27:39.971 "assigned_rate_limits": { 00:27:39.971 "rw_ios_per_sec": 0, 00:27:39.971 "rw_mbytes_per_sec": 0, 00:27:39.971 "r_mbytes_per_sec": 0, 00:27:39.971 "w_mbytes_per_sec": 0 00:27:39.971 }, 00:27:39.971 "claimed": false, 00:27:39.971 "zoned": false, 00:27:39.971 "supported_io_types": { 00:27:39.971 "read": true, 00:27:39.971 "write": true, 00:27:40.230 "unmap": false, 00:27:40.230 "write_zeroes": true, 00:27:40.230 "flush": true, 00:27:40.230 "reset": true, 00:27:40.230 "compare": true, 00:27:40.230 "compare_and_write": true, 00:27:40.230 "abort": true, 00:27:40.230 "nvme_admin": true, 00:27:40.230 "nvme_io": true 00:27:40.230 }, 00:27:40.230 "driver_specific": { 00:27:40.230 "nvme": [ 00:27:40.230 { 00:27:40.230 "trid": { 00:27:40.230 "trtype": "TCP", 00:27:40.230 "adrfam": "IPv4", 00:27:40.230 "traddr": "10.0.0.2", 00:27:40.230 "trsvcid": "4421", 00:27:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:40.230 }, 00:27:40.230 "ctrlr_data": { 00:27:40.230 "cntlid": 3, 00:27:40.230 "vendor_id": "0x8086", 00:27:40.230 "model_number": "SPDK bdev Controller", 00:27:40.230 "serial_number": "00000000000000000000", 00:27:40.230 "firmware_revision": "24.01.1", 00:27:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.230 "oacs": { 00:27:40.230 "security": 0, 00:27:40.230 "format": 0, 00:27:40.230 "firmware": 0, 00:27:40.230 
"ns_manage": 0 00:27:40.230 }, 00:27:40.230 "multi_ctrlr": true, 00:27:40.230 "ana_reporting": false 00:27:40.230 }, 00:27:40.230 "vs": { 00:27:40.230 "nvme_version": "1.3" 00:27:40.230 }, 00:27:40.230 "ns_data": { 00:27:40.230 "id": 1, 00:27:40.230 "can_share": true 00:27:40.230 } 00:27:40.230 } 00:27:40.230 ], 00:27:40.230 "mp_policy": "active_passive" 00:27:40.230 } 00:27:40.230 } 00:27:40.230 ] 00:27:40.230 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.230 13:57:42 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.230 13:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.230 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:27:40.230 13:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.230 13:57:42 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ntX9pn3KbM 00:27:40.230 13:57:42 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:40.230 13:57:42 -- host/async_init.sh@78 -- # nvmftestfini 00:27:40.230 13:57:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:40.230 13:57:42 -- nvmf/common.sh@116 -- # sync 00:27:40.230 13:57:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:40.230 13:57:42 -- nvmf/common.sh@119 -- # set +e 00:27:40.230 13:57:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:40.230 13:57:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:40.230 rmmod nvme_tcp 00:27:40.230 rmmod nvme_fabrics 00:27:40.230 rmmod nvme_keyring 00:27:40.230 13:57:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:40.230 13:57:42 -- nvmf/common.sh@123 -- # set -e 00:27:40.230 13:57:42 -- nvmf/common.sh@124 -- # return 0 00:27:40.230 13:57:42 -- nvmf/common.sh@477 -- # '[' -n 1720769 ']' 00:27:40.230 13:57:42 -- nvmf/common.sh@478 -- # killprocess 1720769 00:27:40.231 13:57:42 -- common/autotest_common.sh@926 -- # '[' -z 1720769 ']' 00:27:40.231 13:57:42 -- common/autotest_common.sh@930 -- # kill -0 1720769 00:27:40.231 13:57:42 -- common/autotest_common.sh@931 -- # uname 00:27:40.231 13:57:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.231 13:57:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1720769 00:27:40.231 13:57:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:40.231 13:57:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:40.231 13:57:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1720769' 00:27:40.231 killing process with pid 1720769 00:27:40.231 13:57:42 -- common/autotest_common.sh@945 -- # kill 1720769 00:27:40.231 13:57:42 -- common/autotest_common.sh@950 -- # wait 1720769 00:27:40.490 13:57:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:40.490 13:57:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:40.490 13:57:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:40.490 13:57:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.490 13:57:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:40.490 13:57:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.490 13:57:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.490 13:57:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.395 13:57:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:42.395 00:27:42.395 real 0m9.236s 00:27:42.395 user 0m3.374s 00:27:42.395 sys 0m4.339s 00:27:42.395 13:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.395 13:57:44 -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.395 ************************************ 00:27:42.395 END TEST nvmf_async_init 00:27:42.395 ************************************ 00:27:42.395 13:57:44 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:42.395 13:57:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:42.395 13:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.395 13:57:44 -- common/autotest_common.sh@10 -- # set +x 00:27:42.395 ************************************ 00:27:42.395 START TEST dma 00:27:42.395 ************************************ 00:27:42.395 13:57:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:42.656 * Looking for test storage... 00:27:42.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.656 13:57:44 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.656 13:57:44 -- nvmf/common.sh@7 -- # uname -s 00:27:42.656 13:57:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.656 13:57:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.656 13:57:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.656 13:57:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.656 13:57:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.656 13:57:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.656 13:57:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.656 13:57:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.656 13:57:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.656 13:57:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.656 13:57:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.656 13:57:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.656 13:57:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.656 13:57:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.656 13:57:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.656 13:57:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.656 13:57:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.656 13:57:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.656 13:57:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.656 13:57:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:44 -- paths/export.sh@5 -- # export PATH 00:27:42.656 13:57:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:44 -- nvmf/common.sh@46 -- # : 0 00:27:42.656 13:57:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:42.656 13:57:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:42.656 13:57:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:42.656 13:57:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.656 13:57:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.656 13:57:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:42.656 13:57:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:42.656 13:57:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:42.656 13:57:44 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:42.656 13:57:44 -- host/dma.sh@13 -- # exit 0 00:27:42.656 00:27:42.656 real 0m0.108s 00:27:42.656 user 0m0.058s 00:27:42.656 sys 0m0.058s 00:27:42.656 13:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.656 13:57:44 -- common/autotest_common.sh@10 -- # set +x 00:27:42.656 ************************************ 00:27:42.656 END TEST dma 00:27:42.656 ************************************ 00:27:42.656 13:57:44 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:42.656 13:57:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:42.656 13:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.656 13:57:44 -- common/autotest_common.sh@10 -- # set +x 00:27:42.656 ************************************ 00:27:42.656 START TEST nvmf_identify 00:27:42.656 ************************************ 00:27:42.656 13:57:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:42.656 * Looking for 
test storage... 00:27:42.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:42.656 13:57:45 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.656 13:57:45 -- nvmf/common.sh@7 -- # uname -s 00:27:42.656 13:57:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.656 13:57:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.656 13:57:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.656 13:57:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.656 13:57:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.656 13:57:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.656 13:57:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.656 13:57:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.656 13:57:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.656 13:57:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.656 13:57:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.656 13:57:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.656 13:57:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.656 13:57:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.656 13:57:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.656 13:57:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.656 13:57:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.656 13:57:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.656 13:57:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.656 13:57:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.656 13:57:45 -- paths/export.sh@5 -- # export PATH 00:27:42.657 13:57:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.657 13:57:45 -- nvmf/common.sh@46 -- # : 0 00:27:42.657 13:57:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:42.657 13:57:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:42.657 13:57:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:42.657 13:57:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.657 13:57:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.657 13:57:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:42.657 13:57:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:42.657 13:57:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:42.657 13:57:45 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:42.657 13:57:45 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:42.657 13:57:45 -- host/identify.sh@14 -- # nvmftestinit 00:27:42.657 13:57:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:42.657 13:57:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.657 13:57:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:42.657 13:57:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:42.657 13:57:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:42.657 13:57:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.657 13:57:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.657 13:57:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.657 13:57:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:42.657 13:57:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:42.657 13:57:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:42.657 13:57:45 -- common/autotest_common.sh@10 -- # set +x 00:27:47.931 13:57:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:47.931 13:57:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:47.931 13:57:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:47.931 13:57:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:47.932 13:57:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:47.932 13:57:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:47.932 13:57:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:47.932 13:57:49 -- nvmf/common.sh@294 -- # net_devs=() 00:27:47.932 13:57:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:47.932 13:57:49 -- nvmf/common.sh@295 
-- # e810=() 00:27:47.932 13:57:49 -- nvmf/common.sh@295 -- # local -ga e810 00:27:47.932 13:57:49 -- nvmf/common.sh@296 -- # x722=() 00:27:47.932 13:57:49 -- nvmf/common.sh@296 -- # local -ga x722 00:27:47.932 13:57:49 -- nvmf/common.sh@297 -- # mlx=() 00:27:47.932 13:57:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:47.932 13:57:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.932 13:57:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:47.932 13:57:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:47.932 13:57:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.932 13:57:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:47.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:47.932 13:57:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:47.932 13:57:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:47.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:47.932 13:57:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.932 13:57:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.932 13:57:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.932 13:57:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:47.932 Found 
net devices under 0000:86:00.0: cvl_0_0 00:27:47.932 13:57:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.932 13:57:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:47.932 13:57:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.932 13:57:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.932 13:57:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:47.932 Found net devices under 0000:86:00.1: cvl_0_1 00:27:47.932 13:57:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.932 13:57:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:47.932 13:57:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:47.932 13:57:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:47.932 13:57:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.932 13:57:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.932 13:57:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.932 13:57:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:47.932 13:57:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.932 13:57:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.932 13:57:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:47.932 13:57:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.932 13:57:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.932 13:57:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:47.932 13:57:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:47.932 13:57:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.932 13:57:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.932 13:57:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.932 13:57:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.932 13:57:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:47.932 13:57:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.932 13:57:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.932 13:57:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.932 13:57:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:47.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:47.932 00:27:47.932 --- 10.0.0.2 ping statistics --- 00:27:47.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.932 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:47.932 13:57:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:47.932 00:27:47.932 --- 10.0.0.1 ping statistics --- 00:27:47.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.932 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:47.932 13:57:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.932 13:57:50 -- nvmf/common.sh@410 -- # return 0 00:27:47.932 13:57:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:47.932 13:57:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.932 13:57:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:47.932 13:57:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:47.932 13:57:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.932 13:57:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:47.932 13:57:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:47.932 13:57:50 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:47.932 13:57:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.932 13:57:50 -- common/autotest_common.sh@10 -- # set +x 00:27:47.932 13:57:50 -- host/identify.sh@19 -- # nvmfpid=1724484 00:27:47.932 13:57:50 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:47.932 13:57:50 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:47.932 13:57:50 -- host/identify.sh@23 -- # waitforlisten 1724484 00:27:47.932 13:57:50 -- common/autotest_common.sh@819 -- # '[' -z 1724484 ']' 00:27:47.932 13:57:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.932 13:57:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:47.932 13:57:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.932 13:57:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:47.932 13:57:50 -- common/autotest_common.sh@10 -- # set +x 00:27:47.932 [2024-07-11 13:57:50.247031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:47.932 [2024-07-11 13:57:50.247070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.932 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.932 [2024-07-11 13:57:50.306594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.932 [2024-07-11 13:57:50.345875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:47.932 [2024-07-11 13:57:50.346003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.932 [2024-07-11 13:57:50.346016] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.932 [2024-07-11 13:57:50.346023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
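Once the reactors below come up and waitforlisten returns, host/identify.sh provisions a malloc-backed namespace rather than async_init's null bdev (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 were set when the script was sourced above). Its rpc_cmd sequence, condensed from the traces that follow:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # -u: in-capsule data size
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery subsystem too
    rpc_cmd nvmf_get_subsystems                         # dumps the JSON shown below

The fixed NGUID/EUI64 values give the identify output stable fields to assert against; the nvmf_get_subsystems dump below also surfaces a deprecation warning (listener.transport in favor of trtype, to be removed in v24.05).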
00:27:47.932 [2024-07-11 13:57:50.346068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.932 [2024-07-11 13:57:50.346188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.932 [2024-07-11 13:57:50.346229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.932 [2024-07-11 13:57:50.346232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.875 13:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.875 13:57:51 -- common/autotest_common.sh@852 -- # return 0 00:27:48.875 13:57:51 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.875 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.875 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.875 [2024-07-11 13:57:51.052510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.875 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.875 13:57:51 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:48.875 13:57:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 13:57:51 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 Malloc0 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 [2024-07-11 13:57:51.140562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:48.876 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.876 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:48.876 [2024-07-11 13:57:51.156392] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:48.876 [ 
00:27:48.876 { 00:27:48.876 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:48.876 "subtype": "Discovery", 00:27:48.876 "listen_addresses": [ 00:27:48.876 { 00:27:48.876 "transport": "TCP", 00:27:48.876 "trtype": "TCP", 00:27:48.876 "adrfam": "IPv4", 00:27:48.876 "traddr": "10.0.0.2", 00:27:48.876 "trsvcid": "4420" 00:27:48.876 } 00:27:48.876 ], 00:27:48.876 "allow_any_host": true, 00:27:48.876 "hosts": [] 00:27:48.876 }, 00:27:48.876 { 00:27:48.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.876 "subtype": "NVMe", 00:27:48.876 "listen_addresses": [ 00:27:48.876 { 00:27:48.876 "transport": "TCP", 00:27:48.876 "trtype": "TCP", 00:27:48.876 "adrfam": "IPv4", 00:27:48.876 "traddr": "10.0.0.2", 00:27:48.876 "trsvcid": "4420" 00:27:48.876 } 00:27:48.876 ], 00:27:48.876 "allow_any_host": true, 00:27:48.876 "hosts": [], 00:27:48.876 "serial_number": "SPDK00000000000001", 00:27:48.876 "model_number": "SPDK bdev Controller", 00:27:48.876 "max_namespaces": 32, 00:27:48.876 "min_cntlid": 1, 00:27:48.876 "max_cntlid": 65519, 00:27:48.876 "namespaces": [ 00:27:48.876 { 00:27:48.876 "nsid": 1, 00:27:48.876 "bdev_name": "Malloc0", 00:27:48.876 "name": "Malloc0", 00:27:48.876 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:48.876 "eui64": "ABCDEF0123456789", 00:27:48.876 "uuid": "c2370df6-85f3-457f-8961-ec6c159f361b" 00:27:48.876 } 00:27:48.876 ] 00:27:48.876 } 00:27:48.876 ] 00:27:48.876 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.876 13:57:51 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:48.876 [2024-07-11 13:57:51.190176] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
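The identify example starting here was launched, per the host/identify.sh@39 trace above, as:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

-r selects the target by transport ID string (here the discovery subsystem over TCP), and -L all enables every debug log flag, which is what makes the controller bring-up visible as nvme_tcp/nvme_ctrlr DEBUG lines below: FABRIC CONNECT on the admin queue, PROPERTY GETs for VS/CAP/CC, CC.EN written to 1, a wait for CSTS.RDY = 1, and finally the 4096-byte Identify Controller transfer (IDENTIFY opcode 06h, cdw10:00000001).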
00:27:48.876 [2024-07-11 13:57:51.190222] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724633 ] 00:27:48.876 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.876 [2024-07-11 13:57:51.218688] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:48.876 [2024-07-11 13:57:51.218732] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:48.876 [2024-07-11 13:57:51.218736] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:48.876 [2024-07-11 13:57:51.218746] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:48.876 [2024-07-11 13:57:51.218753] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:48.876 [2024-07-11 13:57:51.219116] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:48.876 [2024-07-11 13:57:51.219147] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2428470 0 00:27:48.876 [2024-07-11 13:57:51.233169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:48.876 [2024-07-11 13:57:51.233183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:48.876 [2024-07-11 13:57:51.233187] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:48.876 [2024-07-11 13:57:51.233190] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:48.876 [2024-07-11 13:57:51.233223] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.876 [2024-07-11 13:57:51.233228] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.876 [2024-07-11 13:57:51.233232] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.876 [2024-07-11 13:57:51.233243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:48.876 [2024-07-11 13:57:51.233259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.876 [2024-07-11 13:57:51.240169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.876 [2024-07-11 13:57:51.240177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.876 [2024-07-11 13:57:51.240180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.876 [2024-07-11 13:57:51.240184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.876 [2024-07-11 13:57:51.240194] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:48.876 [2024-07-11 13:57:51.240199] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:48.876 [2024-07-11 13:57:51.240204] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:48.876 [2024-07-11 13:57:51.240215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.876 [2024-07-11 13:57:51.240218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:48.876 [2024-07-11 13:57:51.240221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.876 [2024-07-11 13:57:51.240227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.240240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.240423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 13:57:51.240430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.240433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.240442] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:48.877 [2024-07-11 13:57:51.240448] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:48.877 [2024-07-11 13:57:51.240454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240457] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240460] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.240466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.240477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.240552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 13:57:51.240557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.240560] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240564] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.240568] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:48.877 [2024-07-11 13:57:51.240575] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.240581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240587] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.240593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.240602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.240681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 
13:57:51.240687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.240690] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240693] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.240697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.240705] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240712] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.240718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.240726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.240798] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 13:57:51.240804] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.240807] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.240817] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:48.877 [2024-07-11 13:57:51.240821] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.240827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.240932] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:48.877 [2024-07-11 13:57:51.240936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.240943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240946] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.240949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.240955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.240964] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.241059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 13:57:51.241065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.241068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.241076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:48.877 [2024-07-11 13:57:51.241083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.241096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.877 [2024-07-11 13:57:51.241105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.877 [2024-07-11 13:57:51.241219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.877 [2024-07-11 13:57:51.241225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.877 [2024-07-11 13:57:51.241228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.877 [2024-07-11 13:57:51.241236] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:48.877 [2024-07-11 13:57:51.241240] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:48.877 [2024-07-11 13:57:51.241248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:48.877 [2024-07-11 13:57:51.241255] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:48.877 [2024-07-11 13:57:51.241263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241266] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.877 [2024-07-11 13:57:51.241271] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.877 [2024-07-11 13:57:51.241277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.878 [2024-07-11 13:57:51.241288] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.878 [2024-07-11 13:57:51.241391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:48.878 [2024-07-11 13:57:51.241397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:48.878 [2024-07-11 13:57:51.241400] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241404] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2428470): datao=0, datal=4096, cccid=0 00:27:48.878 [2024-07-11 13:57:51.241408] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2491240) on tqpair(0x2428470): 
expected_datao=0, payload_size=4096 00:27:48.878 [2024-07-11 13:57:51.241414] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241418] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.878 [2024-07-11 13:57:51.241456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.878 [2024-07-11 13:57:51.241459] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.878 [2024-07-11 13:57:51.241469] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:48.878 [2024-07-11 13:57:51.241473] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:48.878 [2024-07-11 13:57:51.241476] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:48.878 [2024-07-11 13:57:51.241481] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:48.878 [2024-07-11 13:57:51.241485] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:48.878 [2024-07-11 13:57:51.241488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:48.878 [2024-07-11 13:57:51.241499] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:48.878 [2024-07-11 13:57:51.241506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:48.878 [2024-07-11 13:57:51.241528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.878 [2024-07-11 13:57:51.241609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.878 [2024-07-11 13:57:51.241615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.878 [2024-07-11 13:57:51.241618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491240) on tqpair=0x2428470 00:27:48.878 [2024-07-11 13:57:51.241628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
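
The SET FEATURES ASYNC EVENT CONFIGURATION command above (cdw10:0000000b, feature identifier 0x0b) selects which async events the host wants; the four ASYNC EVENT REQUEST submissions that arm the controller's AER slots follow in the next entries, one per slot, matching the Async Event Request Limit of 4 reported further down. As a minimal sketch, assuming only SPDK's public API rather than the identify tool's actual source, an application just registers a callback and lets the driver manage the AER slots itself:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative callback: invoked when one of the armed AERs completes,
     * e.g. on a Discovery Log Change Notice from this discovery subsystem. */
    static void
    on_async_event(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        printf("async event: cdw0=0x%08x\n", cpl->cdw0);
    }

    /* After spdk_nvme_connect() has returned a controller handle: */
    static void
    watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_async_event, NULL);
    }
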
00:27:48.878 [2024-07-11 13:57:51.241646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.878 [2024-07-11 13:57:51.241662] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.878 [2024-07-11 13:57:51.241678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241684] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.878 [2024-07-11 13:57:51.241692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:48.878 [2024-07-11 13:57:51.241703] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:48.878 [2024-07-11 13:57:51.241708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.878 [2024-07-11 13:57:51.241730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491240, cid 0, qid 0 00:27:48.878 [2024-07-11 13:57:51.241735] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24913a0, cid 1, qid 0 00:27:48.878 [2024-07-11 13:57:51.241739] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491500, cid 2, qid 0 00:27:48.878 [2024-07-11 13:57:51.241743] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.878 [2024-07-11 13:57:51.241747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24917c0, cid 4, qid 0 00:27:48.878 [2024-07-11 13:57:51.241859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.878 [2024-07-11 13:57:51.241865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.878 [2024-07-11 13:57:51.241868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241871] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24917c0) on tqpair=0x2428470 00:27:48.878 [2024-07-11 13:57:51.241876] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:48.878 [2024-07-11 13:57:51.241880] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:48.878 [2024-07-11 13:57:51.241888] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.241895] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2428470) 00:27:48.878 [2024-07-11 13:57:51.241900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.878 [2024-07-11 13:57:51.241912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24917c0, cid 4, qid 0 00:27:48.878 [2024-07-11 13:57:51.242003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:48.878 [2024-07-11 13:57:51.242009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:48.878 [2024-07-11 13:57:51.242012] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:48.878 [2024-07-11 13:57:51.242015] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2428470): datao=0, datal=4096, cccid=4 00:27:48.878 [2024-07-11 13:57:51.242019] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24917c0) on tqpair(0x2428470): expected_datao=0, payload_size=4096 00:27:48.879 [2024-07-11 13:57:51.242025] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242028] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.879 [2024-07-11 13:57:51.242057] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.879 [2024-07-11 13:57:51.242060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242063] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24917c0) on tqpair=0x2428470 00:27:48.879 [2024-07-11 13:57:51.242073] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:48.879 [2024-07-11 13:57:51.242093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2428470) 00:27:48.879 [2024-07-11 13:57:51.242106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.879 [2024-07-11 13:57:51.242111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2428470) 00:27:48.879 [2024-07-11 
13:57:51.242122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.879 [2024-07-11 13:57:51.242135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24917c0, cid 4, qid 0 00:27:48.879 [2024-07-11 13:57:51.242139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491920, cid 5, qid 0 00:27:48.879 [2024-07-11 13:57:51.242256] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:48.879 [2024-07-11 13:57:51.242262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:48.879 [2024-07-11 13:57:51.242265] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242268] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2428470): datao=0, datal=1024, cccid=4 00:27:48.879 [2024-07-11 13:57:51.242272] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24917c0) on tqpair(0x2428470): expected_datao=0, payload_size=1024 00:27:48.879 [2024-07-11 13:57:51.242278] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242281] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.879 [2024-07-11 13:57:51.242290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.879 [2024-07-11 13:57:51.242293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.242296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491920) on tqpair=0x2428470 00:27:48.879 [2024-07-11 13:57:51.283348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.879 [2024-07-11 13:57:51.283362] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.879 [2024-07-11 13:57:51.283366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.283369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24917c0) on tqpair=0x2428470 00:27:48.879 [2024-07-11 13:57:51.283380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.283383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.283387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2428470) 00:27:48.879 [2024-07-11 13:57:51.283393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.879 [2024-07-11 13:57:51.283409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24917c0, cid 4, qid 0 00:27:48.879 [2024-07-11 13:57:51.283492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:48.879 [2024-07-11 13:57:51.283498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:48.879 [2024-07-11 13:57:51.283501] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:48.879 [2024-07-11 13:57:51.283504] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2428470): datao=0, datal=3072, cccid=4 00:27:48.879 [2024-07-11 13:57:51.283507] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24917c0) on tqpair(0x2428470): expected_datao=0, payload_size=3072 
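
The three GET LOG PAGE (02) commands in this stretch all target log identifier 0x70, the discovery log page: cdw10:00ff0070 reads the first 1024 bytes (0x00ff is a 0-based dword count, i.e. 256 dwords), cdw10:02ff0070 then pulls the full 3072-byte page once the record count is known, and cdw10:00010070 re-reads just the 8-byte generation counter, presumably to verify the log did not change between reads; the c2h_data entries above report the matching datal values of 1024, 3072 and 8. A minimal sketch of one such read through SPDK's public API follows; the helper name and the polling loop are illustrative, not the test's code:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        /* Real code would also check spdk_nvme_cpl_is_error(cpl). */
        *(bool *)cb_arg = true;
    }

    /* Read 'len' bytes of the discovery log (LID 0x70) at byte 'offset'. */
    static int
    read_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf,
                       uint32_t len, uint64_t offset)
    {
        bool done = false;
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0 /* nsid:0, as in the trace */,
                                              buf, len, offset,
                                              log_page_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
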
00:27:48.879 [2024-07-11 13:57:51.283536] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283540] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283595] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:48.879 [2024-07-11 13:57:51.283600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:48.879 [2024-07-11 13:57:51.283603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24917c0) on tqpair=0x2428470
00:27:48.879 [2024-07-11 13:57:51.283614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2428470)
00:27:48.879 [2024-07-11 13:57:51.283626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.879 [2024-07-11 13:57:51.283639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24917c0, cid 4, qid 0
00:27:48.879 [2024-07-11 13:57:51.283723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:48.879 [2024-07-11 13:57:51.283728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:48.879 [2024-07-11 13:57:51.283731] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283734] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2428470): datao=0, datal=8, cccid=4
00:27:48.879 [2024-07-11 13:57:51.283738] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24917c0) on tqpair(0x2428470): expected_datao=0, payload_size=8
00:27:48.879 [2024-07-11 13:57:51.283744] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.283747] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.324310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:48.879 [2024-07-11 13:57:51.324322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:48.879 [2024-07-11 13:57:51.324325] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:48.879 [2024-07-11 13:57:51.324328] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24917c0) on tqpair=0x2428470
00:27:48.879 =====================================================
00:27:48.879 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:48.879 =====================================================
00:27:48.879 Controller Capabilities/Features
00:27:48.879 ================================
00:27:48.879 Vendor ID: 0000
00:27:48.879 Subsystem Vendor ID: 0000
00:27:48.879 Serial Number: ....................
00:27:48.879 Model Number: ........................................
00:27:48.879 Firmware Version: 24.01.1
00:27:48.879 Recommended Arb Burst: 0
00:27:48.879 IEEE OUI Identifier: 00 00 00
00:27:48.879 Multi-path I/O
00:27:48.879 May have multiple subsystem ports: No
00:27:48.879 May have multiple controllers: No
00:27:48.879 Associated with SR-IOV VF: No
00:27:48.879 Max Data Transfer Size: 131072
00:27:48.879 Max Number of Namespaces: 0
00:27:48.880 Max Number of I/O Queues: 1024
00:27:48.880 NVMe Specification Version (VS): 1.3
00:27:48.880 NVMe Specification Version (Identify): 1.3
00:27:48.880 Maximum Queue Entries: 128
00:27:48.880 Contiguous Queues Required: Yes
00:27:48.880 Arbitration Mechanisms Supported
00:27:48.880 Weighted Round Robin: Not Supported
00:27:48.880 Vendor Specific: Not Supported
00:27:48.880 Reset Timeout: 15000 ms
00:27:48.880 Doorbell Stride: 4 bytes
00:27:48.880 NVM Subsystem Reset: Not Supported
00:27:48.880 Command Sets Supported
00:27:48.880 NVM Command Set: Supported
00:27:48.880 Boot Partition: Not Supported
00:27:48.880 Memory Page Size Minimum: 4096 bytes
00:27:48.880 Memory Page Size Maximum: 4096 bytes
00:27:48.880 Persistent Memory Region: Not Supported
00:27:48.880 Optional Asynchronous Events Supported
00:27:48.880 Namespace Attribute Notices: Not Supported
00:27:48.880 Firmware Activation Notices: Not Supported
00:27:48.880 ANA Change Notices: Not Supported
00:27:48.880 PLE Aggregate Log Change Notices: Not Supported
00:27:48.880 LBA Status Info Alert Notices: Not Supported
00:27:48.880 EGE Aggregate Log Change Notices: Not Supported
00:27:48.880 Normal NVM Subsystem Shutdown event: Not Supported
00:27:48.880 Zone Descriptor Change Notices: Not Supported
00:27:48.880 Discovery Log Change Notices: Supported
00:27:48.880 Controller Attributes
00:27:48.880 128-bit Host Identifier: Not Supported
00:27:48.880 Non-Operational Permissive Mode: Not Supported
00:27:48.880 NVM Sets: Not Supported
00:27:48.880 Read Recovery Levels: Not Supported
00:27:48.880 Endurance Groups: Not Supported
00:27:48.880 Predictable Latency Mode: Not Supported
00:27:48.880 Traffic Based Keep ALive: Not Supported
00:27:48.880 Namespace Granularity: Not Supported
00:27:48.880 SQ Associations: Not Supported
00:27:48.880 UUID List: Not Supported
00:27:48.880 Multi-Domain Subsystem: Not Supported
00:27:48.880 Fixed Capacity Management: Not Supported
00:27:48.880 Variable Capacity Management: Not Supported
00:27:48.880 Delete Endurance Group: Not Supported
00:27:48.880 Delete NVM Set: Not Supported
00:27:48.880 Extended LBA Formats Supported: Not Supported
00:27:48.880 Flexible Data Placement Supported: Not Supported
00:27:48.880
00:27:48.880 Controller Memory Buffer Support
00:27:48.880 ================================
00:27:48.880 Supported: No
00:27:48.880
00:27:48.880 Persistent Memory Region Support
00:27:48.880 ================================
00:27:48.880 Supported: No
00:27:48.880
00:27:48.880 Admin Command Set Attributes
00:27:48.880 ============================
00:27:48.880 Security Send/Receive: Not Supported
00:27:48.880 Format NVM: Not Supported
00:27:48.880 Firmware Activate/Download: Not Supported
00:27:48.880 Namespace Management: Not Supported
00:27:48.880 Device Self-Test: Not Supported
00:27:48.880 Directives: Not Supported
00:27:48.880 NVMe-MI: Not Supported
00:27:48.880 Virtualization Management: Not Supported
00:27:48.880 Doorbell Buffer Config: Not Supported
00:27:48.880 Get LBA Status Capability: Not Supported
00:27:48.880 Command & Feature Lockdown Capability: Not Supported
00:27:48.880 Abort Command Limit: 1
00:27:48.880 Async Event Request Limit: 4
00:27:48.880 Number of Firmware Slots: N/A
00:27:48.880 Firmware Slot 1 Read-Only: N/A
00:27:48.880 Firmware Activation Without Reset: N/A
00:27:48.880 Multiple Update Detection Support: N/A
00:27:48.880 Firmware Update Granularity: No Information Provided
00:27:48.880 Per-Namespace SMART Log: No
00:27:48.880 Asymmetric Namespace Access Log Page: Not Supported
00:27:48.880 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:48.880 Command Effects Log Page: Not Supported
00:27:48.880 Get Log Page Extended Data: Supported
00:27:48.880 Telemetry Log Pages: Not Supported
00:27:48.880 Persistent Event Log Pages: Not Supported
00:27:48.880 Supported Log Pages Log Page: May Support
00:27:48.880 Commands Supported & Effects Log Page: Not Supported
00:27:48.880 Feature Identifiers & Effects Log Page:May Support
00:27:48.880 NVMe-MI Commands & Effects Log Page: May Support
00:27:48.880 Data Area 4 for Telemetry Log: Not Supported
00:27:48.880 Error Log Page Entries Supported: 128
00:27:48.880 Keep Alive: Not Supported
00:27:48.880
00:27:48.880 NVM Command Set Attributes
00:27:48.880 ==========================
00:27:48.880 Submission Queue Entry Size
00:27:48.880 Max: 1
00:27:48.880 Min: 1
00:27:48.880 Completion Queue Entry Size
00:27:48.880 Max: 1
00:27:48.880 Min: 1
00:27:48.880 Number of Namespaces: 0
00:27:48.880 Compare Command: Not Supported
00:27:48.880 Write Uncorrectable Command: Not Supported
00:27:48.880 Dataset Management Command: Not Supported
00:27:48.880 Write Zeroes Command: Not Supported
00:27:48.880 Set Features Save Field: Not Supported
00:27:48.880 Reservations: Not Supported
00:27:48.880 Timestamp: Not Supported
00:27:48.880 Copy: Not Supported
00:27:48.880 Volatile Write Cache: Not Present
00:27:48.880 Atomic Write Unit (Normal): 1
00:27:48.880 Atomic Write Unit (PFail): 1
00:27:48.880 Atomic Compare & Write Unit: 1
00:27:48.880 Fused Compare & Write: Supported
00:27:48.880 Scatter-Gather List
00:27:48.880 SGL Command Set: Supported
00:27:48.880 SGL Keyed: Supported
00:27:48.880 SGL Bit Bucket Descriptor: Not Supported
00:27:48.880 SGL Metadata Pointer: Not Supported
00:27:48.880 Oversized SGL: Not Supported
00:27:48.880 SGL Metadata Address: Not Supported
00:27:48.880 SGL Offset: Supported
00:27:48.880 Transport SGL Data Block: Not Supported
00:27:48.880 Replay Protected Memory Block: Not Supported
00:27:48.880
00:27:48.880 Firmware Slot Information
00:27:48.880 =========================
00:27:48.880 Active slot: 0
00:27:48.880
00:27:48.880
00:27:48.880 Error Log
00:27:48.881 =========
00:27:48.881
00:27:48.881 Active Namespaces
00:27:48.881 =================
00:27:48.881 Discovery Log Page
00:27:48.881 ==================
00:27:48.881 Generation Counter: 2
00:27:48.881 Number of Records: 2
00:27:48.881 Record Format: 0
00:27:48.881
00:27:48.881 Discovery Log Entry 0
00:27:48.881 ----------------------
00:27:48.881 Transport Type: 3 (TCP)
00:27:48.881 Address Family: 1 (IPv4)
00:27:48.881 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:48.881 Entry Flags:
00:27:48.881 Duplicate Returned Information: 1
00:27:48.881 Explicit Persistent Connection Support for Discovery: 1
00:27:48.881 Transport Requirements:
00:27:48.881 Secure Channel: Not Required
00:27:48.881 Port ID: 0 (0x0000)
00:27:48.881 Controller ID: 65535 (0xffff)
00:27:48.881 Admin Max SQ Size: 128
00:27:48.881 Transport Service Identifier: 4420
00:27:48.881 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:48.881 Transport Address: 10.0.0.2
Discovery Log Entry 1
00:27:48.881 ----------------------
00:27:48.881 Transport Type: 3 (TCP)
00:27:48.881 Address Family: 1 (IPv4)
00:27:48.881 Subsystem Type: 2 (NVM Subsystem)
00:27:48.881 Entry Flags:
00:27:48.881 Duplicate Returned Information: 0
00:27:48.881 Explicit Persistent Connection Support for Discovery: 0
00:27:48.881 Transport Requirements:
00:27:48.881 Secure Channel: Not Required
00:27:48.881 Port ID: 0 (0x0000)
00:27:48.881 Controller ID: 65535 (0xffff)
00:27:48.881 Admin Max SQ Size: 128
00:27:48.881 Transport Service Identifier: 4420
00:27:48.881 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:27:48.881 Transport Address: 10.0.0.2
[2024-07-11 13:57:51.324411] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:27:48.881 [2024-07-11 13:57:51.324426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.881 [2024-07-11 13:57:51.324432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.881 [2024-07-11 13:57:51.324437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.881 [2024-07-11 13:57:51.324442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:48.881 [2024-07-11 13:57:51.324449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:48.881 [2024-07-11 13:57:51.324453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:48.881 [2024-07-11 13:57:51.324456] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470)
00:27:48.881 [2024-07-11 13:57:51.324463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.881 [2024-07-11 13:57:51.324476] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0
00:27:48.881 [2024-07-11 13:57:51.324609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:48.881 [2024-07-11 13:57:51.324615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:48.881 [2024-07-11 13:57:51.324618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:48.881 [2024-07-11 13:57:51.324621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470
00:27:48.881 [2024-07-11 13:57:51.324627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:48.881 [2024-07-11 13:57:51.324630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:48.881 [2024-07-11 13:57:51.324633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470)
00:27:48.881 [2024-07-11 13:57:51.324639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:48.881 [2024-07-11 13:57:51.324652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0
00:27:48.881 [2024-07-11 13:57:51.324761] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:48.881 [2024-07-11 13:57:51.324766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:48.881 [2024-07-11 13:57:51.324769]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.881 [2024-07-11 13:57:51.324777] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:48.881 [2024-07-11 13:57:51.324781] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:48.881 [2024-07-11 13:57:51.324789] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.881 [2024-07-11 13:57:51.324801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.881 [2024-07-11 13:57:51.324810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.881 [2024-07-11 13:57:51.324885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.881 [2024-07-11 13:57:51.324891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.881 [2024-07-11 13:57:51.324894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324897] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.881 [2024-07-11 13:57:51.324906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324910] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.324915] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.881 [2024-07-11 13:57:51.324920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.881 [2024-07-11 13:57:51.324930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.881 [2024-07-11 13:57:51.325010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.881 [2024-07-11 13:57:51.325016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.881 [2024-07-11 13:57:51.325019] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.325022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.881 [2024-07-11 13:57:51.325030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.325034] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.881 [2024-07-11 13:57:51.325037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.881 [2024-07-11 13:57:51.325042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.881 [2024-07-11 13:57:51.325051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.881 [2024-07-11 13:57:51.325170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 
13:57:51.325176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325179] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325182] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325190] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325197] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325324] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325431] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325451] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325564] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325573] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
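
Everything from "Prepare to destruct SSD" onward is the teardown path: the four outstanding AERs complete as "ABORTED - SQ DELETION", CC is written to request shutdown (the FABRIC PROPERTY SET), and CSTS is then polled by the repeated FABRIC PROPERTY GETs here until the controller reports "shutdown complete in 5 milliseconds" a few entries below. End to end, the whole trace corresponds to roughly the following sketch against SPDK's public API, using this job's discovery target; this is an illustration under those assumptions, not the identify tool's source:

    #include <stdio.h>
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target string format as the -r argument used by the test. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:tcp adrfam:IPv4 "
                "traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
            return 1;
        }

        /* Runs the admin-queue bring-up traced above: ICReq/ICResp, FABRIC
         * CONNECT, VS/CAP reads, CC.EN=1, CSTS.RDY wait, IDENTIFY, AER arming
         * and keep-alive setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Subsystem NQN: %s\n", (const char *)cdata->subnqn);

        /* Produces the shutdown sequence seen here: AERs aborted on SQ
         * deletion, then CSTS polled until shutdown completes. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }
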
00:27:48.882 [2024-07-11 13:57:51.325576] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325605] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325722] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325725] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325757] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.325887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.325899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.325908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.325984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.325989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.325992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.325996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.326004] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.326008] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.326011] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.326017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.882 [2024-07-11 13:57:51.326027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:48.882 [2024-07-11 13:57:51.326119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:48.882 [2024-07-11 13:57:51.326125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:48.882 [2024-07-11 13:57:51.326128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.326131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:48.882 [2024-07-11 13:57:51.326139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.326143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:48.882 [2024-07-11 13:57:51.326146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:48.882 [2024-07-11 13:57:51.326152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.330165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:49.148 [2024-07-11 13:57:51.330177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.330183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.330186] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.330189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:49.148 [2024-07-11 13:57:51.330200] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.330204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.330207] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2428470) 00:27:49.148 [2024-07-11 13:57:51.330213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.330224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2491660, cid 3, qid 0 00:27:49.148 [2024-07-11 13:57:51.330422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.330428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.330431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.330434] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2491660) on tqpair=0x2428470 00:27:49.148 [2024-07-11 13:57:51.330441] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:27:49.148 00:27:49.148 13:57:51 -- host/identify.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:49.148 [2024-07-11 13:57:51.363734] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:49.148 [2024-07-11 13:57:51.363767] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724635 ] 00:27:49.148 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.148 [2024-07-11 13:57:51.391392] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:49.148 [2024-07-11 13:57:51.391426] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:49.148 [2024-07-11 13:57:51.391430] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:49.148 [2024-07-11 13:57:51.391441] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:49.148 [2024-07-11 13:57:51.391448] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:49.148 [2024-07-11 13:57:51.391779] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:49.148 [2024-07-11 13:57:51.391805] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x162c470 0 00:27:49.148 [2024-07-11 13:57:51.398172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:49.148 [2024-07-11 13:57:51.398184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:49.148 [2024-07-11 13:57:51.398187] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:49.148 [2024-07-11 13:57:51.398190] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:49.148 [2024-07-11 13:57:51.398219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.398223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.398227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.398236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:49.148 [2024-07-11 13:57:51.398250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.406169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.406177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.406180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406183] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.406192] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:49.148 [2024-07-11 13:57:51.406198] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:49.148 [2024-07-11 13:57:51.406202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:49.148 [2024-07-11 13:57:51.406211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.406224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.406236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.406412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.406418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.406421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.406430] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:49.148 [2024-07-11 13:57:51.406436] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:49.148 [2024-07-11 13:57:51.406442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.406454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.406466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.406543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.406549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.406552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406555] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.406560] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:49.148 [2024-07-11 13:57:51.406567] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.406573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.406585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.406594] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.406676] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.406682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.406684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.406692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.406701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.406713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.406722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.406796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.406801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.406805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.406812] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:49.148 [2024-07-11 13:57:51.406816] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.406822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.406927] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:49.148 [2024-07-11 13:57:51.406930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.406936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.406942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.406950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.148 [2024-07-11 13:57:51.406960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.148 [2024-07-11 13:57:51.407086] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.148 [2024-07-11 13:57:51.407091] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.148 [2024-07-11 13:57:51.407094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.407097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.148 [2024-07-11 13:57:51.407101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:49.148 [2024-07-11 13:57:51.407110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.407113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.148 [2024-07-11 13:57:51.407116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.148 [2024-07-11 13:57:51.407122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.149 [2024-07-11 13:57:51.407131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.149 [2024-07-11 13:57:51.407246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.407252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.407255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.407262] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:49.149 [2024-07-11 13:57:51.407266] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407274] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:49.149 [2024-07-11 13:57:51.407280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.149 [2024-07-11 13:57:51.407310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.149 [2024-07-11 13:57:51.407446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.149 [2024-07-11 13:57:51.407452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.149 [2024-07-11 13:57:51.407455] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407458] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=4096, cccid=0 00:27:49.149 [2024-07-11 13:57:51.407462] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1695240) on tqpair(0x162c470): expected_datao=0, payload_size=4096 00:27:49.149 [2024-07-11 13:57:51.407491] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407495] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.407552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.407557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.407567] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:49.149 [2024-07-11 13:57:51.407571] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:49.149 [2024-07-11 13:57:51.407575] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:49.149 [2024-07-11 13:57:51.407579] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:49.149 [2024-07-11 13:57:51.407582] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:49.149 [2024-07-11 13:57:51.407586] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:49.149 [2024-07-11 13:57:51.407624] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.149 [2024-07-11 13:57:51.407701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.407707] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.407710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695240) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.407719] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:49.149 [2024-07-11 13:57:51.407735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407741] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.149 [2024-07-11 13:57:51.407751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.149 [2024-07-11 13:57:51.407767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407770] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.149 [2024-07-11 13:57:51.407783] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.407809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.149 [2024-07-11 13:57:51.407820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695240, cid 0, qid 0 00:27:49.149 [2024-07-11 13:57:51.407825] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16953a0, cid 1, qid 0 00:27:49.149 [2024-07-11 13:57:51.407829] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695500, cid 2, qid 0 00:27:49.149 [2024-07-11 13:57:51.407833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.149 [2024-07-11 13:57:51.407836] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.149 [2024-07-11 13:57:51.407950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.407956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.407959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407962] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.407967] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:49.149 [2024-07-11 13:57:51.407971] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407977] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.407990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.407996] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.408001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:49.149 [2024-07-11 13:57:51.408011] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.149 [2024-07-11 13:57:51.408088] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.408094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.408097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.408100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.408151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.408165] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.408172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.408177] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.408180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.149 [2024-07-11 13:57:51.408185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.149 [2024-07-11 13:57:51.408195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.149 [2024-07-11 13:57:51.408280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.149 [2024-07-11 13:57:51.408286] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.149 [2024-07-11 13:57:51.408289] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.408292] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=4096, cccid=4 00:27:49.149 [2024-07-11 13:57:51.408295] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16957c0) on tqpair(0x162c470): expected_datao=0, payload_size=4096 00:27:49.149 [2024-07-11 13:57:51.408322] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.408325] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.449236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.149 [2024-07-11 13:57:51.449251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.149 [2024-07-11 13:57:51.449257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.149 [2024-07-11 13:57:51.449262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.149 [2024-07-11 13:57:51.449278] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:49.149 [2024-07-11 13:57:51.449292] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:49.149 [2024-07-11 13:57:51.449302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.449309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.449312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.449316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.449323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.449336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.150 [2024-07-11 13:57:51.449449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.150 [2024-07-11 13:57:51.449455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.150 [2024-07-11 13:57:51.449458] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.449461] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=4096, cccid=4 00:27:49.150 [2024-07-11 13:57:51.449465] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16957c0) on tqpair(0x162c470): expected_datao=0, payload_size=4096 00:27:49.150 [2024-07-11 13:57:51.449472] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.449475] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.490315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.490318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.490336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490347] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.490368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.490378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.150 [2024-07-11 13:57:51.490469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.150 [2024-07-11 13:57:51.490475] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.150 [2024-07-11 13:57:51.490478] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490481] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=4096, cccid=4 00:27:49.150 [2024-07-11 13:57:51.490485] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16957c0) on tqpair(0x162c470): expected_datao=0, payload_size=4096 00:27:49.150 [2024-07-11 13:57:51.490491] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490495] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490521] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.490527] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.490530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.490540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490554] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490559] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490567] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:49.150 [2024-07-11 13:57:51.490571] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:49.150 [2024-07-11 13:57:51.490576] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to ready (no timeout) 00:27:49.150 [2024-07-11 13:57:51.490588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.490600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.490606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490609] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.490628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:49.150 [2024-07-11 13:57:51.490640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.150 [2024-07-11 13:57:51.490645] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695920, cid 5, qid 0 00:27:49.150 [2024-07-11 13:57:51.490741] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.490746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.490749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.490759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.490764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.490766] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490769] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695920) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.490778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.490790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.490799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695920, cid 5, qid 0 00:27:49.150 [2024-07-11 13:57:51.490874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.490880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.490883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695920) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.490894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490897] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.490900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.490906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.490915] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695920, cid 5, qid 0 00:27:49.150 [2024-07-11 13:57:51.491006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.491012] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.491016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695920) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.491027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.491039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.491048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695920, cid 5, qid 0 00:27:49.150 [2024-07-11 13:57:51.491121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.150 [2024-07-11 13:57:51.491126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.150 [2024-07-11 13:57:51.491131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491134] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695920) on tqpair=0x162c470 00:27:49.150 [2024-07-11 13:57:51.491145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491151] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.491157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.491171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.491182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.491188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491191] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491194] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.491199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.491205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.150 [2024-07-11 13:57:51.491211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x162c470) 00:27:49.150 [2024-07-11 13:57:51.491216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.150 [2024-07-11 13:57:51.491226] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695920, cid 5, qid 0 00:27:49.150 [2024-07-11 13:57:51.491231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16957c0, cid 4, qid 0 00:27:49.150 [2024-07-11 13:57:51.491235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695a80, cid 6, qid 0 00:27:49.150 [2024-07-11 13:57:51.491239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695be0, cid 7, qid 0 00:27:49.150 [2024-07-11 13:57:51.491427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.150 [2024-07-11 13:57:51.491434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.151 [2024-07-11 13:57:51.491436] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491440] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=8192, cccid=5 00:27:49.151 [2024-07-11 13:57:51.491444] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1695920) on tqpair(0x162c470): expected_datao=0, payload_size=8192 00:27:49.151 [2024-07-11 13:57:51.491450] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491453] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.151 [2024-07-11 13:57:51.491463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.151 [2024-07-11 13:57:51.491466] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491469] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=512, cccid=4 00:27:49.151 [2024-07-11 13:57:51.491472] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16957c0) on tqpair(0x162c470): expected_datao=0, payload_size=512 00:27:49.151 [2024-07-11 13:57:51.491480] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491483] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.151 [2024-07-11 13:57:51.491493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.151 [2024-07-11 13:57:51.491496] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491498] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x162c470): datao=0, datal=512, cccid=6 00:27:49.151 [2024-07-11 13:57:51.491502] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1695a80) on tqpair(0x162c470): expected_datao=0, payload_size=512 00:27:49.151 [2024-07-11 13:57:51.491508] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491510] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:49.151 [2024-07-11 13:57:51.491520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:49.151 [2024-07-11 13:57:51.491523] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491526] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x162c470): datao=0, datal=4096, cccid=7 00:27:49.151 [2024-07-11 13:57:51.491530] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1695be0) on tqpair(0x162c470): expected_datao=0, payload_size=4096 00:27:49.151 [2024-07-11 13:57:51.491535] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491538] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.151 [2024-07-11 13:57:51.491551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.151 [2024-07-11 13:57:51.491554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695920) on tqpair=0x162c470 00:27:49.151 [2024-07-11 13:57:51.491569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.151 [2024-07-11 13:57:51.491574] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.151 [2024-07-11 13:57:51.491577] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x16957c0) on tqpair=0x162c470 00:27:49.151 [2024-07-11 13:57:51.491588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.151 [2024-07-11 13:57:51.491593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.151 [2024-07-11 13:57:51.491596] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695a80) on tqpair=0x162c470 00:27:49.151 [2024-07-11 13:57:51.491605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.151 [2024-07-11 13:57:51.491610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.151 [2024-07-11 13:57:51.491613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.151 [2024-07-11 13:57:51.491616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695be0) on tqpair=0x162c470 00:27:49.151 ===================================================== 00:27:49.151 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.151 ===================================================== 00:27:49.151 Controller Capabilities/Features 00:27:49.151 ================================ 00:27:49.151 Vendor ID: 8086 00:27:49.151 Subsystem Vendor ID: 8086 00:27:49.151 Serial 
Number: SPDK00000000000001 00:27:49.151 Model Number: SPDK bdev Controller 00:27:49.151 Firmware Version: 24.01.1 00:27:49.151 Recommended Arb Burst: 6 00:27:49.151 IEEE OUI Identifier: e4 d2 5c 00:27:49.151 Multi-path I/O 00:27:49.151 May have multiple subsystem ports: Yes 00:27:49.151 May have multiple controllers: Yes 00:27:49.151 Associated with SR-IOV VF: No 00:27:49.151 Max Data Transfer Size: 131072 00:27:49.151 Max Number of Namespaces: 32 00:27:49.151 Max Number of I/O Queues: 127 00:27:49.151 NVMe Specification Version (VS): 1.3 00:27:49.151 NVMe Specification Version (Identify): 1.3 00:27:49.151 Maximum Queue Entries: 128 00:27:49.151 Contiguous Queues Required: Yes 00:27:49.151 Arbitration Mechanisms Supported 00:27:49.151 Weighted Round Robin: Not Supported 00:27:49.151 Vendor Specific: Not Supported 00:27:49.151 Reset Timeout: 15000 ms 00:27:49.151 Doorbell Stride: 4 bytes 00:27:49.151 NVM Subsystem Reset: Not Supported 00:27:49.151 Command Sets Supported 00:27:49.151 NVM Command Set: Supported 00:27:49.151 Boot Partition: Not Supported 00:27:49.151 Memory Page Size Minimum: 4096 bytes 00:27:49.151 Memory Page Size Maximum: 4096 bytes 00:27:49.151 Persistent Memory Region: Not Supported 00:27:49.151 Optional Asynchronous Events Supported 00:27:49.151 Namespace Attribute Notices: Supported 00:27:49.151 Firmware Activation Notices: Not Supported 00:27:49.151 ANA Change Notices: Not Supported 00:27:49.151 PLE Aggregate Log Change Notices: Not Supported 00:27:49.151 LBA Status Info Alert Notices: Not Supported 00:27:49.151 EGE Aggregate Log Change Notices: Not Supported 00:27:49.151 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.151 Zone Descriptor Change Notices: Not Supported 00:27:49.151 Discovery Log Change Notices: Not Supported 00:27:49.151 Controller Attributes 00:27:49.151 128-bit Host Identifier: Supported 00:27:49.151 Non-Operational Permissive Mode: Not Supported 00:27:49.151 NVM Sets: Not Supported 00:27:49.151 Read Recovery Levels: Not Supported 00:27:49.151 Endurance Groups: Not Supported 00:27:49.151 Predictable Latency Mode: Not Supported 00:27:49.151 Traffic Based Keep Alive: Not Supported 00:27:49.151 Namespace Granularity: Not Supported 00:27:49.151 SQ Associations: Not Supported 00:27:49.151 UUID List: Not Supported 00:27:49.151 Multi-Domain Subsystem: Not Supported 00:27:49.151 Fixed Capacity Management: Not Supported 00:27:49.151 Variable Capacity Management: Not Supported 00:27:49.151 Delete Endurance Group: Not Supported 00:27:49.151 Delete NVM Set: Not Supported 00:27:49.151 Extended LBA Formats Supported: Not Supported 00:27:49.151 Flexible Data Placement Supported: Not Supported 00:27:49.151 00:27:49.151 Controller Memory Buffer Support 00:27:49.151 ================================ 00:27:49.151 Supported: No 00:27:49.151 00:27:49.151 Persistent Memory Region Support 00:27:49.151 ================================ 00:27:49.151 Supported: No 00:27:49.151 00:27:49.151 Admin Command Set Attributes 00:27:49.151 ============================ 00:27:49.151 Security Send/Receive: Not Supported 00:27:49.151 Format NVM: Not Supported 00:27:49.151 Firmware Activate/Download: Not Supported 00:27:49.151 Namespace Management: Not Supported 00:27:49.151 Device Self-Test: Not Supported 00:27:49.151 Directives: Not Supported 00:27:49.151 NVMe-MI: Not Supported 00:27:49.151 Virtualization Management: Not Supported 00:27:49.151 Doorbell Buffer Config: Not Supported 00:27:49.151 Get LBA Status Capability: Not Supported 00:27:49.151 Command & Feature Lockdown 
Capability: Not Supported 00:27:49.151 Abort Command Limit: 4 00:27:49.151 Async Event Request Limit: 4 00:27:49.151 Number of Firmware Slots: N/A 00:27:49.151 Firmware Slot 1 Read-Only: N/A 00:27:49.151 Firmware Activation Without Reset: N/A 00:27:49.151 Multiple Update Detection Support: N/A 00:27:49.151 Firmware Update Granularity: No Information Provided 00:27:49.151 Per-Namespace SMART Log: No 00:27:49.151 Asymmetric Namespace Access Log Page: Not Supported 00:27:49.151 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:49.151 Command Effects Log Page: Supported 00:27:49.151 Get Log Page Extended Data: Supported 00:27:49.151 Telemetry Log Pages: Not Supported 00:27:49.151 Persistent Event Log Pages: Not Supported 00:27:49.151 Supported Log Pages Log Page: May Support 00:27:49.151 Commands Supported & Effects Log Page: Not Supported 00:27:49.151 Feature Identifiers & Effects Log Page: May Support 00:27:49.151 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.151 Data Area 4 for Telemetry Log: Not Supported 00:27:49.151 Error Log Page Entries Supported: 128 00:27:49.151 Keep Alive: Supported 00:27:49.151 Keep Alive Granularity: 10000 ms 00:27:49.151 00:27:49.151 NVM Command Set Attributes 00:27:49.151 ========================== 00:27:49.151 Submission Queue Entry Size 00:27:49.151 Max: 64 00:27:49.151 Min: 64 00:27:49.151 Completion Queue Entry Size 00:27:49.151 Max: 16 00:27:49.151 Min: 16 00:27:49.151 Number of Namespaces: 32 00:27:49.151 Compare Command: Supported 00:27:49.151 Write Uncorrectable Command: Not Supported 00:27:49.151 Dataset Management Command: Supported 00:27:49.151 Write Zeroes Command: Supported 00:27:49.151 Set Features Save Field: Not Supported 00:27:49.151 Reservations: Supported 00:27:49.151 Timestamp: Not Supported 00:27:49.151 Copy: Supported 00:27:49.151 Volatile Write Cache: Present 00:27:49.151 Atomic Write Unit (Normal): 1 00:27:49.151 Atomic Write Unit (PFail): 1 00:27:49.151 Atomic Compare & Write Unit: 1 00:27:49.151 Fused Compare & Write: Supported 00:27:49.151 Scatter-Gather List 00:27:49.151 SGL Command Set: Supported 00:27:49.151 SGL Keyed: Supported 00:27:49.151 SGL Bit Bucket Descriptor: Not Supported 00:27:49.152 SGL Metadata Pointer: Not Supported 00:27:49.152 Oversized SGL: Not Supported 00:27:49.152 SGL Metadata Address: Not Supported 00:27:49.152 SGL Offset: Supported 00:27:49.152 Transport SGL Data Block: Not Supported 00:27:49.152 Replay Protected Memory Block: Not Supported 00:27:49.152 00:27:49.152 Firmware Slot Information 00:27:49.152 ========================= 00:27:49.152 Active slot: 1 00:27:49.152 Slot 1 Firmware Revision: 24.01.1 00:27:49.152 00:27:49.152 00:27:49.152 Commands Supported and Effects 00:27:49.152 ============================== 00:27:49.152 Admin Commands 00:27:49.152 -------------- 00:27:49.152 Get Log Page (02h): Supported 00:27:49.152 Identify (06h): Supported 00:27:49.152 Abort (08h): Supported 00:27:49.152 Set Features (09h): Supported 00:27:49.152 Get Features (0Ah): Supported 00:27:49.152 Asynchronous Event Request (0Ch): Supported 00:27:49.152 Keep Alive (18h): Supported 00:27:49.152 I/O Commands 00:27:49.152 ------------ 00:27:49.152 Flush (00h): Supported LBA-Change 00:27:49.152 Write (01h): Supported LBA-Change 00:27:49.152 Read (02h): Supported 00:27:49.152 Compare (05h): Supported 00:27:49.152 Write Zeroes (08h): Supported LBA-Change 00:27:49.152 Dataset Management (09h): Supported LBA-Change 00:27:49.152 Copy (19h): Supported LBA-Change 00:27:49.152 Unknown (79h): Supported LBA-Change 00:27:49.152 
Unknown (7Ah): Supported 00:27:49.152 00:27:49.152 Error Log 00:27:49.152 ========= 00:27:49.152 00:27:49.152 Arbitration 00:27:49.152 =========== 00:27:49.152 Arbitration Burst: 1 00:27:49.152 00:27:49.152 Power Management 00:27:49.152 ================ 00:27:49.152 Number of Power States: 1 00:27:49.152 Current Power State: Power State #0 00:27:49.152 Power State #0: 00:27:49.152 Max Power: 0.00 W 00:27:49.152 Non-Operational State: Operational 00:27:49.152 Entry Latency: Not Reported 00:27:49.152 Exit Latency: Not Reported 00:27:49.152 Relative Read Throughput: 0 00:27:49.152 Relative Read Latency: 0 00:27:49.152 Relative Write Throughput: 0 00:27:49.152 Relative Write Latency: 0 00:27:49.152 Idle Power: Not Reported 00:27:49.152 Active Power: Not Reported 00:27:49.152 Non-Operational Permissive Mode: Not Supported 00:27:49.152 00:27:49.152 Health Information 00:27:49.152 ================== 00:27:49.152 Critical Warnings: 00:27:49.152 Available Spare Space: OK 00:27:49.152 Temperature: OK 00:27:49.152 Device Reliability: OK 00:27:49.152 Read Only: No 00:27:49.152 Volatile Memory Backup: OK 00:27:49.152 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:49.152 Temperature Threshold: [2024-07-11 13:57:51.491710] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.491715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.491718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.491724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.491736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695be0, cid 7, qid 0 00:27:49.152 [2024-07-11 13:57:51.491853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.152 [2024-07-11 13:57:51.491860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.152 [2024-07-11 13:57:51.491863] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.491866] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695be0) on tqpair=0x162c470 00:27:49.152 [2024-07-11 13:57:51.491894] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:49.152 [2024-07-11 13:57:51.491904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.152 [2024-07-11 13:57:51.491909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.152 [2024-07-11 13:57:51.491914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.152 [2024-07-11 13:57:51.491919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.152 [2024-07-11 13:57:51.491925] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.491929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.491932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.491937] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.491948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.152 [2024-07-11 13:57:51.492034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.152 [2024-07-11 13:57:51.492040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.152 [2024-07-11 13:57:51.492043] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.152 [2024-07-11 13:57:51.492052] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492055] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.492064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.492076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.152 [2024-07-11 13:57:51.492176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.152 [2024-07-11 13:57:51.492182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.152 [2024-07-11 13:57:51.492185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.152 [2024-07-11 13:57:51.492193] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:49.152 [2024-07-11 13:57:51.492197] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:49.152 [2024-07-11 13:57:51.492206] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492209] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492212] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.492218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.492227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.152 [2024-07-11 13:57:51.492302] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.152 [2024-07-11 13:57:51.492308] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.152 [2024-07-11 13:57:51.492313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.152 [2024-07-11 13:57:51.492325] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492331] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.492337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.492346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.152 [2024-07-11 13:57:51.492419] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.152 [2024-07-11 13:57:51.492425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.152 [2024-07-11 13:57:51.492428] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.152 [2024-07-11 13:57:51.492439] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.152 [2024-07-11 13:57:51.492446] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.152 [2024-07-11 13:57:51.492451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.152 [2024-07-11 13:57:51.492460] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.492536] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.492542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.492544] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.492556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.492568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.492577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.492652] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.492659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.492661] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.492673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.492685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.492694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.492765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.492771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.492775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.492787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.492799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.492808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.492887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.492893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.492896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492899] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.492908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492911] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.492914] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.492920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.492929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.493008] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.493014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.493017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.493028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.493040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.493049] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 
0 00:27:49.153 [2024-07-11 13:57:51.493125] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.493131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.493134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.493145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.493152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x162c470) 00:27:49.153 [2024-07-11 13:57:51.493158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.153 [2024-07-11 13:57:51.497212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1695660, cid 3, qid 0 00:27:49.153 [2024-07-11 13:57:51.497354] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:49.153 [2024-07-11 13:57:51.497360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:49.153 [2024-07-11 13:57:51.497363] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:49.153 [2024-07-11 13:57:51.497369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1695660) on tqpair=0x162c470 00:27:49.153 [2024-07-11 13:57:51.497377] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:49.153 0 Kelvin (-273 Celsius) 00:27:49.153 Available Spare: 0% 00:27:49.153 Available Spare Threshold: 0% 00:27:49.153 Life Percentage Used: 0% 00:27:49.153 Data Units Read: 0 00:27:49.153 Data Units Written: 0 00:27:49.153 Host Read Commands: 0 00:27:49.153 Host Write Commands: 0 00:27:49.153 Controller Busy Time: 0 minutes 00:27:49.153 Power Cycles: 0 00:27:49.153 Power On Hours: 0 hours 00:27:49.153 Unsafe Shutdowns: 0 00:27:49.153 Unrecoverable Media Errors: 0 00:27:49.153 Lifetime Error Log Entries: 0 00:27:49.153 Warning Temperature Time: 0 minutes 00:27:49.153 Critical Temperature Time: 0 minutes 00:27:49.153 00:27:49.153 Number of Queues 00:27:49.153 ================ 00:27:49.153 Number of I/O Submission Queues: 127 00:27:49.153 Number of I/O Completion Queues: 127 00:27:49.153 00:27:49.153 Active Namespaces 00:27:49.153 ================= 00:27:49.153 Namespace ID:1 00:27:49.153 Error Recovery Timeout: Unlimited 00:27:49.153 Command Set Identifier: NVM (00h) 00:27:49.153 Deallocate: Supported 00:27:49.153 Deallocated/Unwritten Error: Not Supported 00:27:49.153 Deallocated Read Value: Unknown 00:27:49.153 Deallocate in Write Zeroes: Not Supported 00:27:49.153 Deallocated Guard Field: 0xFFFF 00:27:49.153 Flush: Supported 00:27:49.153 Reservation: Supported 00:27:49.153 Namespace Sharing Capabilities: Multiple Controllers 00:27:49.153 Size (in LBAs): 131072 (0GiB) 00:27:49.153 Capacity (in LBAs): 131072 (0GiB) 00:27:49.153 Utilization (in LBAs): 131072 (0GiB) 00:27:49.153 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:49.153 EUI64: ABCDEF0123456789 00:27:49.153 UUID: c2370df6-85f3-457f-8961-ec6c159f361b 00:27:49.153 Thin Provisioning: Not Supported 00:27:49.153 Per-NS Atomic Units: Yes 00:27:49.153 Atomic Boundary Size (Normal): 0 00:27:49.153 Atomic Boundary 
Size (PFail): 0 00:27:49.153 Atomic Boundary Offset: 0 00:27:49.153 Maximum Single Source Range Length: 65535 00:27:49.153 Maximum Copy Length: 65535 00:27:49.153 Maximum Source Range Count: 1 00:27:49.153 NGUID/EUI64 Never Reused: No 00:27:49.153 Namespace Write Protected: No 00:27:49.153 Number of LBA Formats: 1 00:27:49.153 Current LBA Format: LBA Format #00 00:27:49.153 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:49.153 00:27:49.153 13:57:51 -- host/identify.sh@51 -- # sync 00:27:49.153 13:57:51 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:49.153 13:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.153 13:57:51 -- common/autotest_common.sh@10 -- # set +x 00:27:49.153 13:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.153 13:57:51 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:49.153 13:57:51 -- host/identify.sh@56 -- # nvmftestfini 00:27:49.153 13:57:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:49.153 13:57:51 -- nvmf/common.sh@116 -- # sync 00:27:49.153 13:57:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:49.153 13:57:51 -- nvmf/common.sh@119 -- # set +e 00:27:49.153 13:57:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:49.153 13:57:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:49.153 rmmod nvme_tcp 00:27:49.153 rmmod nvme_fabrics 00:27:49.153 rmmod nvme_keyring 00:27:49.153 13:57:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:49.153 13:57:51 -- nvmf/common.sh@123 -- # set -e 00:27:49.153 13:57:51 -- nvmf/common.sh@124 -- # return 0 00:27:49.153 13:57:51 -- nvmf/common.sh@477 -- # '[' -n 1724484 ']' 00:27:49.153 13:57:51 -- nvmf/common.sh@478 -- # killprocess 1724484 00:27:49.153 13:57:51 -- common/autotest_common.sh@926 -- # '[' -z 1724484 ']' 00:27:49.153 13:57:51 -- common/autotest_common.sh@930 -- # kill -0 1724484 00:27:49.153 13:57:51 -- common/autotest_common.sh@931 -- # uname 00:27:49.153 13:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:49.153 13:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1724484 00:27:49.413 13:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:49.413 13:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:49.413 13:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1724484' 00:27:49.413 killing process with pid 1724484 00:27:49.413 13:57:51 -- common/autotest_common.sh@945 -- # kill 1724484 00:27:49.413 [2024-07-11 13:57:51.619385] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:49.413 13:57:51 -- common/autotest_common.sh@950 -- # wait 1724484 00:27:49.413 13:57:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:49.413 13:57:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:49.413 13:57:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:49.413 13:57:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.413 13:57:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:49.413 13:57:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.413 13:57:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.413 13:57:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.956 13:57:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:51.956 00:27:51.956 real 
0m8.931s 00:27:51.956 user 0m7.213s 00:27:51.956 sys 0m4.208s 00:27:51.956 13:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:51.956 13:57:53 -- common/autotest_common.sh@10 -- # set +x 00:27:51.956 ************************************ 00:27:51.956 END TEST nvmf_identify 00:27:51.956 ************************************ 00:27:51.956 13:57:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:51.956 13:57:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:51.956 13:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:51.956 13:57:53 -- common/autotest_common.sh@10 -- # set +x 00:27:51.956 ************************************ 00:27:51.956 START TEST nvmf_perf 00:27:51.956 ************************************ 00:27:51.956 13:57:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:51.956 * Looking for test storage... 00:27:51.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.956 13:57:53 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.956 13:57:53 -- nvmf/common.sh@7 -- # uname -s 00:27:51.956 13:57:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.956 13:57:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.956 13:57:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.956 13:57:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.956 13:57:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.956 13:57:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.956 13:57:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.956 13:57:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.956 13:57:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.956 13:57:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.956 13:57:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:51.956 13:57:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:51.956 13:57:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.956 13:57:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.956 13:57:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.956 13:57:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.956 13:57:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.956 13:57:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.956 13:57:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.956 13:57:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.956 13:57:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.956 13:57:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.956 13:57:54 -- paths/export.sh@5 -- # export PATH 00:27:51.956 13:57:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.956 13:57:54 -- nvmf/common.sh@46 -- # : 0 00:27:51.956 13:57:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:51.956 13:57:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:51.956 13:57:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:51.956 13:57:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.956 13:57:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.956 13:57:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:51.956 13:57:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:51.956 13:57:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:51.956 13:57:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:51.956 13:57:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:51.956 13:57:54 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.956 13:57:54 -- host/perf.sh@17 -- # nvmftestinit 00:27:51.956 13:57:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:51.956 13:57:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.956 13:57:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:51.956 13:57:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:51.956 13:57:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:51.956 13:57:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.956 13:57:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.956 13:57:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.956 13:57:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:51.956 13:57:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:51.956 13:57:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:51.956 13:57:54 -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.273 13:57:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:57.273 13:57:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:57.273 13:57:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:57.273 13:57:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:57.273 13:57:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:57.273 13:57:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:57.273 13:57:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:57.273 13:57:59 -- nvmf/common.sh@294 -- # net_devs=() 00:27:57.273 13:57:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:57.273 13:57:59 -- nvmf/common.sh@295 -- # e810=() 00:27:57.273 13:57:59 -- nvmf/common.sh@295 -- # local -ga e810 00:27:57.273 13:57:59 -- nvmf/common.sh@296 -- # x722=() 00:27:57.273 13:57:59 -- nvmf/common.sh@296 -- # local -ga x722 00:27:57.273 13:57:59 -- nvmf/common.sh@297 -- # mlx=() 00:27:57.273 13:57:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:57.273 13:57:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.273 13:57:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:57.273 13:57:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:57.273 13:57:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:57.273 13:57:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:57.273 13:57:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:57.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:57.273 13:57:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:57.273 13:57:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:57.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:57.273 13:57:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
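The scan above walks the detected PCI functions; the step it lands on just below resolves each port to its kernel interface through sysfs. That lookup can be sketched standalone in bash, using the two E810 (0x159b) ports this host reported:

for pci in 0000:86:00.0 0000:86:00.1; do
    # every net device bound to this PCI function appears under .../net/<ifname>
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue      # port not bound to a kernel driver
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

On this machine both ports are driven by ice and carry the interface names cvl_0_0 and cvl_0_1, which is what the "Found net devices under ..." lines below report.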
00:27:57.273 13:57:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:57.273 13:57:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:57.273 13:57:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:57.273 13:57:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.274 13:57:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:57.274 13:57:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.274 13:57:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:57.274 Found net devices under 0000:86:00.0: cvl_0_0 00:27:57.274 13:57:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.274 13:57:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:57.274 13:57:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.274 13:57:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:57.274 13:57:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.274 13:57:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:57.274 Found net devices under 0000:86:00.1: cvl_0_1 00:27:57.274 13:57:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.274 13:57:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:57.274 13:57:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:57.274 13:57:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:57.274 13:57:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:57.274 13:57:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:57.274 13:57:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.274 13:57:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.274 13:57:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.274 13:57:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:57.274 13:57:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.274 13:57:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.274 13:57:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:57.274 13:57:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.274 13:57:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.274 13:57:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:57.274 13:57:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:57.274 13:57:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.274 13:57:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.274 13:57:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.274 13:57:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.274 13:57:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:57.274 13:57:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.274 13:57:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.274 13:57:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.274 13:57:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:57.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:57.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:57.274 00:27:57.274 --- 10.0.0.2 ping statistics --- 00:27:57.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.274 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:57.274 13:57:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:27:57.274 00:27:57.274 --- 10.0.0.1 ping statistics --- 00:27:57.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.274 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:57.274 13:57:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.274 13:57:59 -- nvmf/common.sh@410 -- # return 0 00:27:57.274 13:57:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:57.274 13:57:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.274 13:57:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:57.274 13:57:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:57.274 13:57:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.274 13:57:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:57.274 13:57:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:57.274 13:57:59 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:57.274 13:57:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:57.274 13:57:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:57.274 13:57:59 -- common/autotest_common.sh@10 -- # set +x 00:27:57.274 13:57:59 -- nvmf/common.sh@469 -- # nvmfpid=1728172 00:27:57.274 13:57:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:57.274 13:57:59 -- nvmf/common.sh@470 -- # waitforlisten 1728172 00:27:57.274 13:57:59 -- common/autotest_common.sh@819 -- # '[' -z 1728172 ']' 00:27:57.274 13:57:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.274 13:57:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:57.274 13:57:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.274 13:57:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:57.274 13:57:59 -- common/autotest_common.sh@10 -- # set +x 00:27:57.274 [2024-07-11 13:57:59.471324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:57.274 [2024-07-11 13:57:59.471367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.274 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.274 [2024-07-11 13:57:59.533027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.274 [2024-07-11 13:57:59.573411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:57.274 [2024-07-11 13:57:59.573522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.274 [2024-07-11 13:57:59.573531] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
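At this point the test topology is in place: the target port cvl_0_0 sits inside the cvl_0_0_ns_spdk network namespace at 10.0.0.2, the initiator port cvl_0_1 stays in the default namespace at 10.0.0.1, and nvmf_tgt has been launched under ip netns exec. The plumbing, reduced to the bare commands from the run above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target, 0.170 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator, 0.136 ms above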
00:27:57.274 [2024-07-11 13:57:59.573538] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.274 [2024-07-11 13:57:59.573582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.274 [2024-07-11 13:57:59.573698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.274 [2024-07-11 13:57:59.573715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.274 [2024-07-11 13:57:59.573718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.841 13:58:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:57.841 13:58:00 -- common/autotest_common.sh@852 -- # return 0 00:27:57.841 13:58:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:57.841 13:58:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.841 13:58:00 -- common/autotest_common.sh@10 -- # set +x 00:27:58.100 13:58:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.100 13:58:00 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:58.100 13:58:00 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:01.387 13:58:03 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:01.387 13:58:03 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:01.387 13:58:03 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:28:01.387 13:58:03 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:01.387 13:58:03 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:01.387 13:58:03 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:28:01.387 13:58:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:01.387 13:58:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:01.387 13:58:03 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:01.645 [2024-07-11 13:58:03.859496] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.645 13:58:03 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.645 13:58:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:01.645 13:58:04 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:01.903 13:58:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:01.903 13:58:04 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:02.161 13:58:04 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.161 [2024-07-11 13:58:04.587726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.161 13:58:04 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.419 13:58:04 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:28:02.419 13:58:04 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:02.419 13:58:04 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:02.419 13:58:04 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:03.795 Initializing NVMe Controllers 00:28:03.795 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:28:03.795 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:28:03.795 Initialization complete. Launching workers. 00:28:03.795 ======================================================== 00:28:03.795 Latency(us) 00:28:03.795 Device Information : IOPS MiB/s Average min max 00:28:03.795 PCIE (0000:5e:00.0) NSID 1 from core 0: 99379.33 388.20 321.36 9.48 5191.12 00:28:03.795 ======================================================== 00:28:03.795 Total : 99379.33 388.20 321.36 9.48 5191.12 00:28:03.795 00:28:03.795 13:58:06 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:03.795 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.173 Initializing NVMe Controllers 00:28:05.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:05.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:05.173 Initialization complete. Launching workers. 00:28:05.173 ======================================================== 00:28:05.173 Latency(us) 00:28:05.173 Device Information : IOPS MiB/s Average min max 00:28:05.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.64 0.39 10047.42 148.61 45630.49 00:28:05.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.80 0.23 17577.04 7964.78 47898.19 00:28:05.173 ======================================================== 00:28:05.173 Total : 158.44 0.62 12794.08 148.61 47898.19 00:28:05.173 00:28:05.173 13:58:07 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:05.173 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.552 Initializing NVMe Controllers 00:28:06.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:06.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:06.552 Initialization complete. Launching workers. 
00:28:06.552 ======================================================== 00:28:06.552 Latency(us) 00:28:06.552 Device Information : IOPS MiB/s Average min max 00:28:06.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10665.31 41.66 3004.06 325.77 41180.59 00:28:06.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3948.83 15.43 8162.16 5913.06 15722.66 00:28:06.552 ======================================================== 00:28:06.552 Total : 14614.14 57.09 4397.81 325.77 41180.59 00:28:06.552 00:28:06.552 13:58:08 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:06.552 13:58:08 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:06.552 13:58:08 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:06.552 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.090 Initializing NVMe Controllers 00:28:09.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.090 Controller IO queue size 128, less than required. 00:28:09.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.090 Controller IO queue size 128, less than required. 00:28:09.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:09.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:09.090 Initialization complete. Launching workers. 00:28:09.090 ======================================================== 00:28:09.090 Latency(us) 00:28:09.090 Device Information : IOPS MiB/s Average min max 00:28:09.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1176.50 294.12 112339.77 66733.26 152839.20 00:28:09.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.00 148.50 223384.84 62369.07 333703.44 00:28:09.090 ======================================================== 00:28:09.090 Total : 1770.50 442.62 149595.22 62369.07 333703.44 00:28:09.090 00:28:09.090 13:58:11 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:09.090 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.090 No valid NVMe controllers or AIO or URING devices found 00:28:09.090 Initializing NVMe Controllers 00:28:09.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.090 Controller IO queue size 128, less than required. 00:28:09.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.090 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:09.090 Controller IO queue size 128, less than required. 00:28:09.090 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:09.090 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:09.090 WARNING: Some requested NVMe devices were skipped 00:28:09.090 13:58:11 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:09.090 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.625 Initializing NVMe Controllers 00:28:11.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.625 Controller IO queue size 128, less than required. 00:28:11.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:11.625 Controller IO queue size 128, less than required. 00:28:11.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:11.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:11.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:11.625 Initialization complete. Launching workers. 00:28:11.625 00:28:11.625 ==================== 00:28:11.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:11.625 TCP transport: 00:28:11.625 polls: 35412 00:28:11.625 idle_polls: 15596 00:28:11.625 sock_completions: 19816 00:28:11.625 nvme_completions: 4519 00:28:11.625 submitted_requests: 6990 00:28:11.625 queued_requests: 1 00:28:11.625 00:28:11.625 ==================== 00:28:11.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:11.625 TCP transport: 00:28:11.625 polls: 35539 00:28:11.625 idle_polls: 15722 00:28:11.625 sock_completions: 19817 00:28:11.625 nvme_completions: 4593 00:28:11.625 submitted_requests: 7123 00:28:11.625 queued_requests: 1 00:28:11.625 ======================================================== 00:28:11.625 Latency(us) 00:28:11.625 Device Information : IOPS MiB/s Average min max 00:28:11.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1193.50 298.37 109779.39 64678.93 173808.33 00:28:11.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1212.00 303.00 107300.77 40552.20 153470.41 00:28:11.625 ======================================================== 00:28:11.625 Total : 2405.50 601.37 108530.55 40552.20 173808.33 00:28:11.625 00:28:11.625 13:58:13 -- host/perf.sh@66 -- # sync 00:28:11.625 13:58:13 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.908 13:58:14 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:11.908 13:58:14 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:28:11.908 13:58:14 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:15.197 13:58:17 -- host/perf.sh@72 -- # ls_guid=7440e195-3dda-4f72-b705-a513166c7d45 00:28:15.197 13:58:17 -- host/perf.sh@73 -- # get_lvs_free_mb 7440e195-3dda-4f72-b705-a513166c7d45 00:28:15.197 13:58:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=7440e195-3dda-4f72-b705-a513166c7d45 00:28:15.197 13:58:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:15.197 13:58:17 -- common/autotest_common.sh@1345 -- # local fc 00:28:15.197 13:58:17 -- common/autotest_common.sh@1346 -- # local cs 00:28:15.197 13:58:17 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.197 13:58:17 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:15.197 { 00:28:15.197 "uuid": "7440e195-3dda-4f72-b705-a513166c7d45", 00:28:15.197 "name": "lvs_0", 00:28:15.197 "base_bdev": "Nvme0n1", 00:28:15.197 "total_data_clusters": 238234, 00:28:15.197 "free_clusters": 238234, 00:28:15.197 "block_size": 512, 00:28:15.197 "cluster_size": 4194304 00:28:15.197 } 00:28:15.197 ]' 00:28:15.197 13:58:17 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="7440e195-3dda-4f72-b705-a513166c7d45") .free_clusters' 00:28:15.197 13:58:17 -- common/autotest_common.sh@1348 -- # fc=238234 00:28:15.197 13:58:17 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="7440e195-3dda-4f72-b705-a513166c7d45") .cluster_size' 00:28:15.197 13:58:17 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:15.197 13:58:17 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:28:15.197 13:58:17 -- common/autotest_common.sh@1353 -- # echo 952936 00:28:15.197 952936 00:28:15.197 13:58:17 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:15.197 13:58:17 -- host/perf.sh@78 -- # free_mb=20480 00:28:15.197 13:58:17 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7440e195-3dda-4f72-b705-a513166c7d45 lbd_0 20480 00:28:15.765 13:58:18 -- host/perf.sh@80 -- # lb_guid=66d86013-7746-4674-a1c7-87e812bffc6f 00:28:15.765 13:58:18 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 66d86013-7746-4674-a1c7-87e812bffc6f lvs_n_0 00:28:16.332 13:58:18 -- host/perf.sh@83 -- # ls_nested_guid=a6a92e32-e01f-4e7a-a967-43a8005c2ebc 00:28:16.332 13:58:18 -- host/perf.sh@84 -- # get_lvs_free_mb a6a92e32-e01f-4e7a-a967-43a8005c2ebc 00:28:16.332 13:58:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a6a92e32-e01f-4e7a-a967-43a8005c2ebc 00:28:16.332 13:58:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:16.332 13:58:18 -- common/autotest_common.sh@1345 -- # local fc 00:28:16.332 13:58:18 -- common/autotest_common.sh@1346 -- # local cs 00:28:16.332 13:58:18 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.609 13:58:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:16.609 { 00:28:16.609 "uuid": "7440e195-3dda-4f72-b705-a513166c7d45", 00:28:16.609 "name": "lvs_0", 00:28:16.609 "base_bdev": "Nvme0n1", 00:28:16.609 "total_data_clusters": 238234, 00:28:16.609 "free_clusters": 233114, 00:28:16.609 "block_size": 512, 00:28:16.609 "cluster_size": 4194304 00:28:16.609 }, 00:28:16.609 { 00:28:16.609 "uuid": "a6a92e32-e01f-4e7a-a967-43a8005c2ebc", 00:28:16.609 "name": "lvs_n_0", 00:28:16.609 "base_bdev": "66d86013-7746-4674-a1c7-87e812bffc6f", 00:28:16.609 "total_data_clusters": 5114, 00:28:16.609 "free_clusters": 5114, 00:28:16.609 "block_size": 512, 00:28:16.609 "cluster_size": 4194304 00:28:16.609 } 00:28:16.609 ]' 00:28:16.609 13:58:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a6a92e32-e01f-4e7a-a967-43a8005c2ebc") .free_clusters' 00:28:16.609 13:58:18 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:16.609 13:58:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a6a92e32-e01f-4e7a-a967-43a8005c2ebc") .cluster_size' 00:28:16.609 13:58:18 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:16.609 13:58:18 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:28:16.609 13:58:18 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:16.609 20456 00:28:16.609 13:58:18 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:16.609 13:58:18 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6a92e32-e01f-4e7a-a967-43a8005c2ebc lbd_nest_0 20456 00:28:16.868 13:58:19 -- host/perf.sh@88 -- # lb_nested_guid=b902fa95-cf6a-4878-8166-6809c2862435 00:28:16.868 13:58:19 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.868 13:58:19 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:16.868 13:58:19 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b902fa95-cf6a-4878-8166-6809c2862435 00:28:17.126 13:58:19 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.385 13:58:19 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:17.385 13:58:19 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:17.385 13:58:19 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:17.385 13:58:19 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:17.385 13:58:19 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:17.385 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.598 Initializing NVMe Controllers 00:28:29.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.598 Initialization complete. Launching workers. 00:28:29.598 ======================================================== 00:28:29.598 Latency(us) 00:28:29.598 Device Information : IOPS MiB/s Average min max 00:28:29.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.80 0.02 22353.98 169.72 45649.39 00:28:29.598 ======================================================== 00:28:29.598 Total : 44.80 0.02 22353.98 169.72 45649.39 00:28:29.598 00:28:29.598 13:58:30 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:29.598 13:58:30 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.598 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.670 Initializing NVMe Controllers 00:28:39.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.670 Initialization complete. Launching workers. 
00:28:39.670 ======================================================== 00:28:39.670 Latency(us) 00:28:39.670 Device Information : IOPS MiB/s Average min max 00:28:39.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.50 10.56 11834.54 6034.32 18908.71 00:28:39.670 ======================================================== 00:28:39.670 Total : 84.50 10.56 11834.54 6034.32 18908.71 00:28:39.670 00:28:39.670 13:58:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:39.670 13:58:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:39.670 13:58:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.670 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.644 Initializing NVMe Controllers 00:28:49.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.644 Initialization complete. Launching workers. 00:28:49.644 ======================================================== 00:28:49.644 Latency(us) 00:28:49.644 Device Information : IOPS MiB/s Average min max 00:28:49.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8711.80 4.25 3673.58 259.85 9992.77 00:28:49.644 ======================================================== 00:28:49.644 Total : 8711.80 4.25 3673.58 259.85 9992.77 00:28:49.644 00:28:49.644 13:58:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.644 13:58:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.644 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.630 Initializing NVMe Controllers 00:28:59.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.630 Initialization complete. Launching workers. 00:28:59.630 ======================================================== 00:28:59.630 Latency(us) 00:28:59.630 Device Information : IOPS MiB/s Average min max 00:28:59.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2261.00 282.62 14162.04 777.29 50026.14 00:28:59.630 ======================================================== 00:28:59.630 Total : 2261.00 282.62 14162.04 777.29 50026.14 00:28:59.630 00:28:59.630 13:59:00 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:59.630 13:59:00 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.630 13:59:00 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.630 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.610 Initializing NVMe Controllers 00:29:09.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.610 Controller IO queue size 128, less than required. 00:29:09.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:09.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:09.610 Initialization complete. Launching workers. 
00:29:09.610 ======================================================== 00:29:09.610 Latency(us) 00:29:09.610 Device Information : IOPS MiB/s Average min max 00:29:09.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15789.34 7.71 8111.79 1399.29 21540.67 00:29:09.610 ======================================================== 00:29:09.610 Total : 15789.34 7.71 8111.79 1399.29 21540.67 00:29:09.610 00:29:09.610 13:59:11 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:09.610 13:59:11 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.610 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.590 Initializing NVMe Controllers 00:29:19.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.590 Controller IO queue size 128, less than required. 00:29:19.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.590 Initialization complete. Launching workers. 00:29:19.590 ======================================================== 00:29:19.590 Latency(us) 00:29:19.590 Device Information : IOPS MiB/s Average min max 00:29:19.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.21 149.40 107726.97 38590.35 214517.35 00:29:19.590 ======================================================== 00:29:19.590 Total : 1195.21 149.40 107726.97 38590.35 214517.35 00:29:19.590 00:29:19.590 13:59:21 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.590 13:59:21 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b902fa95-cf6a-4878-8166-6809c2862435 00:29:20.159 13:59:22 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:20.419 13:59:22 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66d86013-7746-4674-a1c7-87e812bffc6f 00:29:20.678 13:59:22 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:20.678 13:59:23 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:20.678 13:59:23 -- host/perf.sh@114 -- # nvmftestfini 00:29:20.678 13:59:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:20.678 13:59:23 -- nvmf/common.sh@116 -- # sync 00:29:20.678 13:59:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:20.678 13:59:23 -- nvmf/common.sh@119 -- # set +e 00:29:20.678 13:59:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:20.678 13:59:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:20.678 rmmod nvme_tcp 00:29:20.678 rmmod nvme_fabrics 00:29:20.678 rmmod nvme_keyring 00:29:20.945 13:59:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:20.945 13:59:23 -- nvmf/common.sh@123 -- # set -e 00:29:20.945 13:59:23 -- nvmf/common.sh@124 -- # return 0 00:29:20.945 13:59:23 -- nvmf/common.sh@477 -- # '[' -n 1728172 ']' 00:29:20.945 13:59:23 -- nvmf/common.sh@478 -- # killprocess 1728172 00:29:20.945 13:59:23 -- common/autotest_common.sh@926 -- # '[' -z 1728172 ']' 00:29:20.945 13:59:23 -- common/autotest_common.sh@930 -- # kill 
-0 1728172 00:29:20.945 13:59:23 -- common/autotest_common.sh@931 -- # uname 00:29:20.945 13:59:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:20.945 13:59:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1728172 00:29:20.945 13:59:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:20.945 13:59:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:20.945 13:59:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1728172' 00:29:20.945 killing process with pid 1728172 00:29:20.945 13:59:23 -- common/autotest_common.sh@945 -- # kill 1728172 00:29:20.945 13:59:23 -- common/autotest_common.sh@950 -- # wait 1728172 00:29:22.324 13:59:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:22.324 13:59:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:22.324 13:59:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:22.324 13:59:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.324 13:59:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:22.324 13:59:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.324 13:59:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.324 13:59:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.862 13:59:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:24.862 00:29:24.862 real 1m32.830s 00:29:24.862 user 5m34.765s 00:29:24.862 sys 0m14.514s 00:29:24.862 13:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.862 13:59:26 -- common/autotest_common.sh@10 -- # set +x 00:29:24.862 ************************************ 00:29:24.862 END TEST nvmf_perf 00:29:24.862 ************************************ 00:29:24.862 13:59:26 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:24.862 13:59:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:24.862 13:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.862 13:59:26 -- common/autotest_common.sh@10 -- # set +x 00:29:24.862 ************************************ 00:29:24.862 START TEST nvmf_fio_host 00:29:24.862 ************************************ 00:29:24.862 13:59:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:24.862 * Looking for test storage... 
00:29:24.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:24.862 13:59:26 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.862 13:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.862 13:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.862 13:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.862 13:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@5 -- # export PATH 00:29:24.863 13:59:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.863 13:59:26 -- nvmf/common.sh@7 -- # uname -s 00:29:24.863 13:59:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.863 13:59:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.863 13:59:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.863 13:59:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.863 13:59:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.863 13:59:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.863 13:59:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.863 13:59:26 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.863 13:59:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.863 13:59:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.863 13:59:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.863 13:59:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.863 13:59:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.863 13:59:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.863 13:59:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.863 13:59:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.863 13:59:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.863 13:59:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.863 13:59:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.863 13:59:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- paths/export.sh@5 -- # export PATH 00:29:24.863 13:59:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.863 13:59:26 -- nvmf/common.sh@46 -- # : 0 00:29:24.863 13:59:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:24.863 13:59:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:24.863 13:59:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:24.863 13:59:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.863 13:59:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.863 13:59:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:24.863 13:59:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:24.863 13:59:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:24.863 13:59:26 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.863 13:59:26 -- host/fio.sh@14 -- # nvmftestinit 00:29:24.863 13:59:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:24.863 13:59:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.863 13:59:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:24.863 13:59:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:24.863 13:59:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:24.863 13:59:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.863 13:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.863 13:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.863 13:59:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:24.863 13:59:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:24.863 13:59:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:24.863 13:59:26 -- common/autotest_common.sh@10 -- # set +x 00:29:30.141 13:59:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:30.141 13:59:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:30.141 13:59:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:30.141 13:59:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:30.141 13:59:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:30.141 13:59:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:30.141 13:59:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:30.141 13:59:31 -- nvmf/common.sh@294 -- # net_devs=() 00:29:30.141 13:59:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:30.141 13:59:31 -- nvmf/common.sh@295 -- # e810=() 00:29:30.141 13:59:31 -- nvmf/common.sh@295 -- # local -ga e810 00:29:30.141 13:59:31 -- nvmf/common.sh@296 -- # x722=() 00:29:30.141 13:59:31 -- nvmf/common.sh@296 -- # local -ga x722 00:29:30.141 13:59:31 -- nvmf/common.sh@297 -- # mlx=() 00:29:30.141 13:59:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:30.141 13:59:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.141 13:59:31 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.141 13:59:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:30.141 13:59:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:30.141 13:59:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:30.141 13:59:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:30.141 13:59:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.141 13:59:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:30.141 13:59:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.141 13:59:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:30.141 13:59:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:30.142 13:59:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:30.142 13:59:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.142 13:59:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:30.142 13:59:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.142 13:59:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.142 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.142 13:59:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.142 13:59:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:30.142 13:59:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.142 13:59:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:30.142 13:59:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.142 13:59:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.142 Found net devices under 0000:86:00.1: cvl_0_1 
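The discovery pass above is pure sysfs: nvmf/common.sh caches PCI functions per vendor:device pair (Intel E810 = 8086:159b in this run), then lists each function's net/ directory to recover the kernel interface names it just echoed (cvl_0_0, cvl_0_1). A minimal standalone sketch of the same walk, assuming only the device ID seen in this log:

  # Enumerate 8086:159b functions and their netdevs, mirroring the trace above.
  for dev in /sys/bus/pci/devices/*; do
      [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
      for net in "$dev"/net/*; do
          echo "Found net devices under ${dev##*/}: ${net##*/}"
      done
  done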
00:29:30.142 13:59:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.142 13:59:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:30.142 13:59:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:30.142 13:59:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:30.142 13:59:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.142 13:59:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.142 13:59:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.142 13:59:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:30.142 13:59:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.142 13:59:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.142 13:59:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:30.142 13:59:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.142 13:59:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.142 13:59:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:30.142 13:59:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:30.142 13:59:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.142 13:59:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.142 13:59:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.142 13:59:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.142 13:59:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:30.142 13:59:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.142 13:59:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.142 13:59:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.142 13:59:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:30.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:29:30.142 00:29:30.142 --- 10.0.0.2 ping statistics --- 00:29:30.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.142 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:30.142 13:59:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:29:30.142 00:29:30.142 --- 10.0.0.1 ping statistics --- 00:29:30.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.142 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:30.142 13:59:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.142 13:59:31 -- nvmf/common.sh@410 -- # return 0 00:29:30.142 13:59:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:30.142 13:59:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.142 13:59:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:30.142 13:59:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.142 13:59:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:30.142 13:59:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:30.142 13:59:31 -- host/fio.sh@16 -- # [[ y != y ]] 00:29:30.142 13:59:31 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:30.142 13:59:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:30.142 13:59:31 -- common/autotest_common.sh@10 -- # set +x 00:29:30.142 13:59:31 -- host/fio.sh@24 -- # nvmfpid=1745538 00:29:30.142 13:59:31 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:30.142 13:59:31 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:30.142 13:59:31 -- host/fio.sh@28 -- # waitforlisten 1745538 00:29:30.142 13:59:31 -- common/autotest_common.sh@819 -- # '[' -z 1745538 ']' 00:29:30.142 13:59:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.142 13:59:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:30.142 13:59:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.142 13:59:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:30.142 13:59:31 -- common/autotest_common.sh@10 -- # set +x 00:29:30.142 [2024-07-11 13:59:32.011133] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:30.142 [2024-07-11 13:59:32.011178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.142 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.142 [2024-07-11 13:59:32.068711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.142 [2024-07-11 13:59:32.106747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:30.142 [2024-07-11 13:59:32.106861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.142 [2024-07-11 13:59:32.106869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.142 [2024-07-11 13:59:32.106880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
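nvmf_tcp_init above builds the loopback topology the rest of the test rides on: one E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), its sibling moves into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), an iptables rule admits TCP port 4420, and both directions are ping-verified before the target app starts. Condensed from the trace, with the interface names from this run:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator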
00:29:30.142 [2024-07-11 13:59:32.106966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.142 [2024-07-11 13:59:32.107065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.142 [2024-07-11 13:59:32.107129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.142 [2024-07-11 13:59:32.107130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.401 13:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:30.401 13:59:32 -- common/autotest_common.sh@852 -- # return 0 00:29:30.401 13:59:32 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.660 [2024-07-11 13:59:32.962009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.660 13:59:32 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:30.660 13:59:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:30.660 13:59:32 -- common/autotest_common.sh@10 -- # set +x 00:29:30.660 13:59:33 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:30.919 Malloc1 00:29:30.919 13:59:33 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:31.178 13:59:33 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:31.178 13:59:33 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:31.436 [2024-07-11 13:59:33.732210] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.436 13:59:33 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:31.694 13:59:33 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:31.694 13:59:33 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.694 13:59:33 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.694 13:59:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:31.694 13:59:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:31.694 13:59:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:31.694 13:59:33 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.694 13:59:33 -- common/autotest_common.sh@1320 -- # shift 00:29:31.694 13:59:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:31.694 13:59:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:31.694 13:59:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:31.694 13:59:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:31.694 13:59:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:31.694 13:59:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:31.694 13:59:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:31.694 13:59:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:31.952 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:31.952 fio-3.35 00:29:31.952 Starting 1 thread 00:29:31.952 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.484 00:29:34.484 test: (groupid=0, jobs=1): err= 0: pid=1746027: Thu Jul 11 13:59:36 2024 00:29:34.484 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(96.2MiB/2005msec) 00:29:34.484 slat (nsec): min=1570, max=239275, avg=1710.06, stdev=2186.72 00:29:34.484 clat (usec): min=3635, max=9653, avg=5758.92, stdev=408.77 00:29:34.484 lat (usec): min=3646, max=9655, avg=5760.63, stdev=408.63 00:29:34.484 clat percentiles (usec): 00:29:34.484 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5407], 00:29:34.484 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:29:34.484 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:29:34.484 | 99.00th=[ 6718], 99.50th=[ 6783], 99.90th=[ 7767], 99.95th=[ 8979], 00:29:34.484 | 99.99th=[ 9634] 00:29:34.484 bw ( KiB/s): min=47712, max=49848, per=99.97%, avg=49136.00, stdev=990.04, samples=4 00:29:34.484 iops : min=11928, max=12462, avg=12284.00, stdev=247.51, samples=4 00:29:34.484 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(96.0MiB/2005msec); 0 zone resets 00:29:34.484 slat (nsec): min=1630, max=156952, avg=1794.41, stdev=1204.41 00:29:34.484 clat (usec): min=2309, max=9495, avg=4600.42, stdev=358.20 00:29:34.484 lat (usec): min=2324, max=9496, avg=4602.21, stdev=358.08 00:29:34.484 clat percentiles (usec): 00:29:34.484 | 1.00th=[ 3752], 5.00th=[ 4047], 10.00th=[ 4178], 20.00th=[ 4359], 00:29:34.484 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:29:34.484 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5145], 00:29:34.484 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 7111], 99.95th=[ 8225], 00:29:34.484 | 99.99th=[ 9503] 00:29:34.484 bw ( KiB/s): min=48344, max=49640, per=100.00%, avg=49018.00, stdev=530.19, samples=4 00:29:34.484 iops : min=12086, max=12410, avg=12254.50, stdev=132.55, samples=4 00:29:34.484 lat (msec) : 4=1.88%, 10=98.12% 00:29:34.484 cpu : usr=68.66%, sys=27.84%, ctx=104, majf=0, minf=4 00:29:34.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:34.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.484 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:34.484 issued rwts: total=24638,24566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:34.484 00:29:34.484 Run status group 0 (all jobs): 00:29:34.484 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=96.2MiB (101MB), run=2005-2005msec 00:29:34.484 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=96.0MiB (101MB), run=2005-2005msec 00:29:34.484 13:59:36 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.484 13:59:36 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.484 13:59:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:34.484 13:59:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:34.484 13:59:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:34.484 13:59:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.484 13:59:36 -- common/autotest_common.sh@1320 -- # shift 00:29:34.484 13:59:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:34.484 13:59:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.484 13:59:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:34.485 13:59:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:34.485 13:59:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:34.485 13:59:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:34.485 13:59:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:34.485 13:59:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:34.485 13:59:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:34.485 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:34.485 fio-3.35 00:29:34.485 Starting 1 thread 00:29:34.743 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.389 00:29:37.389 test: (groupid=0, jobs=1): err= 0: pid=1746513: Thu Jul 11 13:59:39 2024 00:29:37.389 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(341MiB/2007msec) 00:29:37.389 slat (nsec): min=2542, max=88897, avg=2848.40, stdev=1275.65 00:29:37.389 clat (usec): min=1960, max=14245, 
avg=6961.40, stdev=1722.57 00:29:37.389 lat (usec): min=1963, max=14248, avg=6964.24, stdev=1722.69 00:29:37.389 clat percentiles (usec): 00:29:37.389 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5407], 00:29:37.389 | 30.00th=[ 5866], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7373], 00:29:37.389 | 70.00th=[ 7832], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[ 9765], 00:29:37.389 | 99.00th=[11863], 99.50th=[12387], 99.90th=[13698], 99.95th=[13960], 00:29:37.389 | 99.99th=[14222] 00:29:37.389 bw ( KiB/s): min=77888, max=97280, per=50.69%, avg=88120.00, stdev=8214.73, samples=4 00:29:37.389 iops : min= 4868, max= 6080, avg=5507.50, stdev=513.42, samples=4 00:29:37.389 write: IOPS=6580, BW=103MiB/s (108MB/s)(180MiB/1751msec); 0 zone resets 00:29:37.389 slat (usec): min=29, max=252, avg=31.77, stdev= 4.54 00:29:37.389 clat (usec): min=2793, max=13601, avg=8229.04, stdev=1386.93 00:29:37.389 lat (usec): min=2825, max=13632, avg=8260.81, stdev=1387.34 00:29:37.389 clat percentiles (usec): 00:29:37.389 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7111], 00:29:37.389 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8356], 00:29:37.389 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10814], 00:29:37.389 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13042], 99.95th=[13173], 00:29:37.389 | 99.99th=[13566] 00:29:37.389 bw ( KiB/s): min=82336, max=101376, per=87.30%, avg=91920.00, stdev=7953.98, samples=4 00:29:37.389 iops : min= 5146, max= 6336, avg=5745.00, stdev=497.12, samples=4 00:29:37.390 lat (msec) : 2=0.01%, 4=1.49%, 10=92.14%, 20=6.38% 00:29:37.390 cpu : usr=86.59%, sys=11.96%, ctx=29, majf=0, minf=1 00:29:37.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:37.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.390 issued rwts: total=21808,11523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.390 00:29:37.390 Run status group 0 (all jobs): 00:29:37.390 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=341MiB (357MB), run=2007-2007msec 00:29:37.390 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=180MiB (189MB), run=1751-1751msec 00:29:37.390 13:59:39 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.390 13:59:39 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:37.390 13:59:39 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:37.390 13:59:39 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:37.390 13:59:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:37.390 13:59:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:37.390 13:59:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:37.390 13:59:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:37.390 13:59:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:37.390 13:59:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:37.390 13:59:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:29:37.390 13:59:39 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 
0000:5e:00.0 -i 10.0.0.2 00:29:40.681 Nvme0n1 00:29:40.681 13:59:42 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:43.216 13:59:45 -- host/fio.sh@53 -- # ls_guid=1ec39548-34df-468d-b2a9-c1ada1700485 00:29:43.216 13:59:45 -- host/fio.sh@54 -- # get_lvs_free_mb 1ec39548-34df-468d-b2a9-c1ada1700485 00:29:43.216 13:59:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1ec39548-34df-468d-b2a9-c1ada1700485 00:29:43.216 13:59:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:43.216 13:59:45 -- common/autotest_common.sh@1345 -- # local fc 00:29:43.216 13:59:45 -- common/autotest_common.sh@1346 -- # local cs 00:29:43.216 13:59:45 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:43.216 13:59:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:43.216 { 00:29:43.216 "uuid": "1ec39548-34df-468d-b2a9-c1ada1700485", 00:29:43.216 "name": "lvs_0", 00:29:43.216 "base_bdev": "Nvme0n1", 00:29:43.216 "total_data_clusters": 930, 00:29:43.216 "free_clusters": 930, 00:29:43.216 "block_size": 512, 00:29:43.216 "cluster_size": 1073741824 00:29:43.216 } 00:29:43.216 ]' 00:29:43.216 13:59:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1ec39548-34df-468d-b2a9-c1ada1700485") .free_clusters' 00:29:43.216 13:59:45 -- common/autotest_common.sh@1348 -- # fc=930 00:29:43.216 13:59:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1ec39548-34df-468d-b2a9-c1ada1700485") .cluster_size' 00:29:43.216 13:59:45 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:43.216 13:59:45 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:29:43.216 13:59:45 -- common/autotest_common.sh@1353 -- # echo 952320 00:29:43.216 952320 00:29:43.216 13:59:45 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:43.783 e3ab3647-36ba-4ac9-b9c7-9b8cd69fa987 00:29:43.783 13:59:45 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:43.783 13:59:46 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:44.042 13:59:46 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:44.322 13:59:46 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:44.322 13:59:46 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:44.322 13:59:46 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:44.322 13:59:46 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.322 13:59:46 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:44.322 13:59:46 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.322 
13:59:46 -- common/autotest_common.sh@1320 -- # shift 00:29:44.322 13:59:46 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:44.322 13:59:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:44.322 13:59:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:44.322 13:59:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:44.322 13:59:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:44.322 13:59:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:44.322 13:59:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:44.322 13:59:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:44.589 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:44.589 fio-3.35 00:29:44.589 Starting 1 thread 00:29:44.589 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.124 00:29:47.125 test: (groupid=0, jobs=1): err= 0: pid=1748289: Thu Jul 11 13:59:49 2024 00:29:47.125 read: IOPS=8392, BW=32.8MiB/s (34.4MB/s)(65.7MiB/2005msec) 00:29:47.125 slat (nsec): min=1608, max=116584, avg=1759.91, stdev=1327.53 00:29:47.125 clat (usec): min=629, max=170190, avg=8441.18, stdev=10094.44 00:29:47.125 lat (usec): min=631, max=170209, avg=8442.94, stdev=10094.67 00:29:47.125 clat percentiles (msec): 00:29:47.125 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:29:47.125 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:29:47.125 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:29:47.125 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 171], 00:29:47.125 | 99.99th=[ 171] 00:29:47.125 bw ( KiB/s): min=23968, max=36896, per=99.79%, avg=33498.00, stdev=6360.76, samples=4 00:29:47.125 iops : min= 5992, max= 9224, avg=8374.50, stdev=1590.19, samples=4 00:29:47.125 write: IOPS=8386, BW=32.8MiB/s (34.3MB/s)(65.7MiB/2005msec); 0 zone resets 00:29:47.125 slat (nsec): min=1663, max=158887, avg=1838.89, stdev=1287.76 00:29:47.125 clat (usec): min=184, max=168346, avg=6756.16, stdev=9412.75 00:29:47.125 lat (usec): min=186, max=168353, avg=6758.00, stdev=9413.04 00:29:47.125 clat percentiles (msec): 00:29:47.125 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:29:47.125 | 30.00th=[ 6], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:29:47.125 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 8], 00:29:47.125 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 169], 99.95th=[ 169], 00:29:47.125 | 99.99th=[ 169] 00:29:47.125 bw ( KiB/s): min=25064, max=36608, per=99.96%, avg=33530.00, stdev=5646.90, samples=4 00:29:47.125 iops : min= 6266, max= 9152, 
avg=8382.50, stdev=1411.73, samples=4 00:29:47.125 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:47.125 lat (msec) : 2=0.06%, 4=0.23%, 10=99.19%, 20=0.11%, 250=0.38% 00:29:47.125 cpu : usr=67.32%, sys=30.09%, ctx=101, majf=0, minf=4 00:29:47.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:47.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.125 issued rwts: total=16827,16814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.125 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.125 00:29:47.125 Run status group 0 (all jobs): 00:29:47.125 READ: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=65.7MiB (68.9MB), run=2005-2005msec 00:29:47.125 WRITE: bw=32.8MiB/s (34.3MB/s), 32.8MiB/s-32.8MiB/s (34.3MB/s-34.3MB/s), io=65.7MiB (68.9MB), run=2005-2005msec 00:29:47.125 13:59:49 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:47.125 13:59:49 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:48.061 13:59:50 -- host/fio.sh@64 -- # ls_nested_guid=d96f1bf5-b161-44bf-8059-1dc3eedc5530 00:29:48.061 13:59:50 -- host/fio.sh@65 -- # get_lvs_free_mb d96f1bf5-b161-44bf-8059-1dc3eedc5530 00:29:48.061 13:59:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d96f1bf5-b161-44bf-8059-1dc3eedc5530 00:29:48.061 13:59:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:48.061 13:59:50 -- common/autotest_common.sh@1345 -- # local fc 00:29:48.061 13:59:50 -- common/autotest_common.sh@1346 -- # local cs 00:29:48.061 13:59:50 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.320 13:59:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:48.320 { 00:29:48.320 "uuid": "1ec39548-34df-468d-b2a9-c1ada1700485", 00:29:48.320 "name": "lvs_0", 00:29:48.320 "base_bdev": "Nvme0n1", 00:29:48.320 "total_data_clusters": 930, 00:29:48.320 "free_clusters": 0, 00:29:48.320 "block_size": 512, 00:29:48.320 "cluster_size": 1073741824 00:29:48.320 }, 00:29:48.320 { 00:29:48.320 "uuid": "d96f1bf5-b161-44bf-8059-1dc3eedc5530", 00:29:48.320 "name": "lvs_n_0", 00:29:48.320 "base_bdev": "e3ab3647-36ba-4ac9-b9c7-9b8cd69fa987", 00:29:48.320 "total_data_clusters": 237847, 00:29:48.320 "free_clusters": 237847, 00:29:48.320 "block_size": 512, 00:29:48.320 "cluster_size": 4194304 00:29:48.320 } 00:29:48.320 ]' 00:29:48.320 13:59:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d96f1bf5-b161-44bf-8059-1dc3eedc5530") .free_clusters' 00:29:48.320 13:59:50 -- common/autotest_common.sh@1348 -- # fc=237847 00:29:48.320 13:59:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d96f1bf5-b161-44bf-8059-1dc3eedc5530") .cluster_size' 00:29:48.320 13:59:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:48.320 13:59:50 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:29:48.320 13:59:50 -- common/autotest_common.sh@1353 -- # echo 951388 00:29:48.320 951388 00:29:48.320 13:59:50 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:48.888 9154c367-91a5-4ef3-abcc-93ef5336e68b 00:29:48.888 13:59:51 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:49.148 13:59:51 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:49.407 13:59:51 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:49.407 13:59:51 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.407 13:59:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.408 13:59:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:49.408 13:59:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:49.408 13:59:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:49.408 13:59:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.408 13:59:51 -- common/autotest_common.sh@1320 -- # shift 00:29:49.408 13:59:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:49.408 13:59:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:49.408 13:59:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:49.408 13:59:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:49.408 13:59:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:49.665 13:59:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:49.665 13:59:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:49.665 13:59:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:49.665 13:59:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.924 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:49.924 fio-3.35 00:29:49.924 Starting 1 thread 00:29:49.924 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.457 00:29:52.457 test: (groupid=0, jobs=1): err= 0: pid=1749349: Thu Jul 11 13:59:54 2024 00:29:52.457 read: IOPS=8008, BW=31.3MiB/s (32.8MB/s)(62.8MiB/2007msec) 00:29:52.457 slat (nsec): min=1544, max=108038, avg=1692.83, stdev=1187.56 00:29:52.457 clat (usec): min=3075, 
max=15501, avg=8870.62, stdev=729.51 00:29:52.457 lat (usec): min=3078, max=15503, avg=8872.31, stdev=729.47 00:29:52.457 clat percentiles (usec): 00:29:52.457 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:29:52.457 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:29:52.457 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:29:52.457 | 99.00th=[10421], 99.50th=[10683], 99.90th=[12125], 99.95th=[13566], 00:29:52.457 | 99.99th=[15533] 00:29:52.457 bw ( KiB/s): min=30728, max=32544, per=99.85%, avg=31986.00, stdev=845.01, samples=4 00:29:52.457 iops : min= 7682, max= 8136, avg=7996.50, stdev=211.25, samples=4 00:29:52.457 write: IOPS=7976, BW=31.2MiB/s (32.7MB/s)(62.5MiB/2007msec); 0 zone resets 00:29:52.457 slat (nsec): min=1594, max=89566, avg=1770.95, stdev=830.84 00:29:52.457 clat (usec): min=1466, max=13588, avg=7038.91, stdev=630.38 00:29:52.457 lat (usec): min=1470, max=13590, avg=7040.68, stdev=630.35 00:29:52.457 clat percentiles (usec): 00:29:52.457 | 1.00th=[ 5538], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6521], 00:29:52.457 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:29:52.457 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7767], 95.00th=[ 8029], 00:29:52.457 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[10028], 99.95th=[10814], 00:29:52.457 | 99.99th=[13566] 00:29:52.457 bw ( KiB/s): min=31760, max=32064, per=100.00%, avg=31924.00, stdev=127.58, samples=4 00:29:52.457 iops : min= 7940, max= 8016, avg=7981.00, stdev=31.90, samples=4 00:29:52.457 lat (msec) : 2=0.01%, 4=0.11%, 10=97.26%, 20=2.63% 00:29:52.457 cpu : usr=67.10%, sys=30.31%, ctx=151, majf=0, minf=4 00:29:52.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:52.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:52.457 issued rwts: total=16073,16009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:52.457 00:29:52.457 Run status group 0 (all jobs): 00:29:52.457 READ: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.8MiB (65.8MB), run=2007-2007msec 00:29:52.457 WRITE: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=62.5MiB (65.6MB), run=2007-2007msec 00:29:52.457 13:59:54 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:52.457 13:59:54 -- host/fio.sh@74 -- # sync 00:29:52.457 13:59:54 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:56.666 13:59:58 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:56.666 13:59:58 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:59.200 14:00:01 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:59.200 14:00:01 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:01.109 14:00:03 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:01.109 14:00:03 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:01.109 14:00:03 -- host/fio.sh@86 -- # nvmftestfini 
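The lvol sizing that fed bdev_lvol_create in both passes above is plain cluster arithmetic: get_lvs_free_mb pulls free_clusters and cluster_size for one store out of bdev_lvol_get_lvstores with jq and converts to MiB, giving 930 x 1 GiB = 952320 MiB for lvs_0 and 237847 x 4 MiB = 951388 MiB for the nested lvs_n_0. A sketch of that helper, assuming rpc.py is reachable as in the trace:

  # free MiB = free_clusters * cluster_size / 1 MiB (values from this run).
  uuid=1ec39548-34df-468d-b2a9-c1ada1700485            # lvs_0 above
  info=$(scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$info")
  cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<<"$info")
  echo $(( fc * cs / 1024 / 1024 ))                    # -> 952320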
00:30:01.109 14:00:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:01.109 14:00:03 -- nvmf/common.sh@116 -- # sync 00:30:01.109 14:00:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:01.109 14:00:03 -- nvmf/common.sh@119 -- # set +e 00:30:01.109 14:00:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:01.109 14:00:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:01.109 rmmod nvme_tcp 00:30:01.109 rmmod nvme_fabrics 00:30:01.109 rmmod nvme_keyring 00:30:01.109 14:00:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:01.109 14:00:03 -- nvmf/common.sh@123 -- # set -e 00:30:01.109 14:00:03 -- nvmf/common.sh@124 -- # return 0 00:30:01.109 14:00:03 -- nvmf/common.sh@477 -- # '[' -n 1745538 ']' 00:30:01.109 14:00:03 -- nvmf/common.sh@478 -- # killprocess 1745538 00:30:01.109 14:00:03 -- common/autotest_common.sh@926 -- # '[' -z 1745538 ']' 00:30:01.109 14:00:03 -- common/autotest_common.sh@930 -- # kill -0 1745538 00:30:01.109 14:00:03 -- common/autotest_common.sh@931 -- # uname 00:30:01.109 14:00:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:01.109 14:00:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1745538 00:30:01.109 14:00:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:01.109 14:00:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:01.109 14:00:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1745538' 00:30:01.109 killing process with pid 1745538 00:30:01.109 14:00:03 -- common/autotest_common.sh@945 -- # kill 1745538 00:30:01.109 14:00:03 -- common/autotest_common.sh@950 -- # wait 1745538 00:30:01.367 14:00:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:01.367 14:00:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:01.367 14:00:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:01.367 14:00:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:01.367 14:00:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:01.367 14:00:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.367 14:00:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.367 14:00:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.268 14:00:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:03.268 00:30:03.268 real 0m38.936s 00:30:03.268 user 2m37.715s 00:30:03.268 sys 0m8.316s 00:30:03.268 14:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:03.268 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:30:03.268 ************************************ 00:30:03.268 END TEST nvmf_fio_host 00:30:03.268 ************************************ 00:30:03.527 14:00:05 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:03.527 14:00:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:03.527 14:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:03.527 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:30:03.527 ************************************ 00:30:03.527 START TEST nvmf_failover 00:30:03.527 ************************************ 00:30:03.527 14:00:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:03.527 * Looking for test storage... 
00:30:03.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:03.527 14:00:05 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.527 14:00:05 -- nvmf/common.sh@7 -- # uname -s 00:30:03.527 14:00:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.527 14:00:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.527 14:00:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.527 14:00:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.527 14:00:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.527 14:00:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.527 14:00:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.527 14:00:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.527 14:00:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.527 14:00:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.527 14:00:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:03.527 14:00:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:03.527 14:00:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.527 14:00:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.527 14:00:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:03.527 14:00:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.527 14:00:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.527 14:00:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.527 14:00:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.527 14:00:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.527 14:00:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.527 14:00:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.527 14:00:05 -- paths/export.sh@5 -- # export PATH 00:30:03.527 14:00:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.527 14:00:05 -- nvmf/common.sh@46 -- # : 0 00:30:03.527 14:00:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:03.527 14:00:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:03.527 14:00:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:03.527 14:00:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.527 14:00:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.527 14:00:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:03.527 14:00:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:03.527 14:00:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:03.527 14:00:05 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:03.527 14:00:05 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:03.527 14:00:05 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:03.527 14:00:05 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:03.527 14:00:05 -- host/failover.sh@18 -- # nvmftestinit 00:30:03.527 14:00:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:03.527 14:00:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.527 14:00:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:03.527 14:00:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:03.527 14:00:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:03.527 14:00:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.527 14:00:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:03.527 14:00:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.527 14:00:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:03.527 14:00:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:03.527 14:00:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:03.527 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:30:08.795 14:00:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:08.795 14:00:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:08.795 14:00:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:08.795 14:00:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:08.795 14:00:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:08.795 14:00:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:08.795 14:00:11 -- 
nvmf/common.sh@292 -- # local -A pci_drivers
00:30:08.795 14:00:11 -- nvmf/common.sh@294 -- # net_devs=()
00:30:08.795 14:00:11 -- nvmf/common.sh@294 -- # local -ga net_devs
00:30:08.795 14:00:11 -- nvmf/common.sh@295 -- # e810=()
00:30:08.795 14:00:11 -- nvmf/common.sh@295 -- # local -ga e810
00:30:08.795 14:00:11 -- nvmf/common.sh@296 -- # x722=()
00:30:08.795 14:00:11 -- nvmf/common.sh@296 -- # local -ga x722
00:30:08.795 14:00:11 -- nvmf/common.sh@297 -- # mlx=()
00:30:08.795 14:00:11 -- nvmf/common.sh@297 -- # local -ga mlx
00:30:08.795 14:00:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:08.795 14:00:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:30:08.795 14:00:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:30:08.795 14:00:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:30:08.795 14:00:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:30:08.795 14:00:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:30:08.795 14:00:11 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:30:08.795 14:00:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:30:08.795 14:00:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:30:08.796 Found 0000:86:00.0 (0x8086 - 0x159b)
00:30:08.796 14:00:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:30:08.796 14:00:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:30:08.796 Found 0000:86:00.1 (0x8086 - 0x159b)
00:30:08.796 14:00:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:30:08.796 14:00:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:30:08.796 14:00:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:08.796 14:00:11 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:30:08.796 14:00:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:08.796 14:00:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:30:08.796 Found net devices under 0000:86:00.0: cvl_0_0
00:30:08.796 14:00:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:30:08.796 14:00:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:30:08.796 14:00:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:08.796 14:00:11 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:30:08.796 14:00:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:08.796 14:00:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:30:08.796 Found net devices under 0000:86:00.1: cvl_0_1
00:30:08.796 14:00:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:30:08.796 14:00:11 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:30:08.796 14:00:11 -- nvmf/common.sh@402 -- # is_hw=yes
00:30:08.796 14:00:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:30:08.796 14:00:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:30:08.796 14:00:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:08.796 14:00:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:08.796 14:00:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:08.796 14:00:11 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:30:08.796 14:00:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:08.796 14:00:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:08.796 14:00:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:30:08.796 14:00:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:08.796 14:00:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:08.796 14:00:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:30:08.796 14:00:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:30:08.796 14:00:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:30:08.796 14:00:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:08.796 14:00:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:08.796 14:00:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:08.796 14:00:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:30:08.796 14:00:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:09.054 14:00:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:09.054 14:00:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:09.054 14:00:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:30:09.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:09.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms
00:30:09.054
00:30:09.054 --- 10.0.0.2 ping statistics ---
00:30:09.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.054 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:30:09.054 14:00:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:09.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:09.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:30:09.054
00:30:09.054 --- 10.0.0.1 ping statistics ---
00:30:09.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:09.054 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:30:09.054 14:00:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:09.054 14:00:11 -- nvmf/common.sh@410 -- # return 0
00:30:09.054 14:00:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:30:09.054 14:00:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:09.054 14:00:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:30:09.054 14:00:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:30:09.054 14:00:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:09.054 14:00:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:30:09.054 14:00:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:30:09.054 14:00:11 -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:30:09.054 14:00:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:09.054 14:00:11 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:09.054 14:00:11 -- common/autotest_common.sh@10 -- # set +x
00:30:09.054 14:00:11 -- nvmf/common.sh@469 -- # nvmfpid=1755042
00:30:09.055 14:00:11 -- nvmf/common.sh@470 -- # waitforlisten 1755042
00:30:09.055 14:00:11 -- common/autotest_common.sh@819 -- # '[' -z 1755042 ']'
00:30:09.055 14:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:09.055 14:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:09.055 14:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:09.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:09.055 14:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:09.055 14:00:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:09.055 14:00:11 -- common/autotest_common.sh@10 -- # set +x
00:30:09.055 [2024-07-11 14:00:11.418545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:30:09.055 [2024-07-11 14:00:11.418589] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.055 EAL: No free 2048 kB hugepages reported on node 1
00:30:09.055 [2024-07-11 14:00:11.475763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:09.312 [2024-07-11 14:00:11.514757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:09.312 [2024-07-11 14:00:11.514866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:09.312 [2024-07-11 14:00:11.514875] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:09.312 [2024-07-11 14:00:11.514882] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
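Condensed for reference, the nvmf_tcp_init wiring traced above amounts to the short shell sequence below. This is a sketch assembled from the xtrace lines, not the helper itself; the interface names (cvl_0_0, cvl_0_1) are simply what this runner's ice ports happened to be called.

    # Point-to-point NVMe/TCP test link, as wired up by nvmf_tcp_init above.
    # Interface names come from this run's discovery; substitute your own.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> root ns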
00:30:09.312 [2024-07-11 14:00:11.514978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:30:09.312 [2024-07-11 14:00:11.515067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:30:09.312 [2024-07-11 14:00:11.515069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:09.879 14:00:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:09.879 14:00:12 -- common/autotest_common.sh@852 -- # return 0
00:30:09.879 14:00:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:09.879 14:00:12 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:09.879 14:00:12 -- common/autotest_common.sh@10 -- # set +x
00:30:09.879 14:00:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:09.879 14:00:12 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:30:10.137 [2024-07-11 14:00:12.414553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:10.137 14:00:12 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:30:10.398 Malloc0
00:30:10.398 14:00:12 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:10.398 14:00:12 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:10.655 14:00:12 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:10.913 [2024-07-11 14:00:13.154766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:10.913 14:00:13 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:10.913 [2024-07-11 14:00:13.331292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:10.913 14:00:13 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:11.173 [2024-07-11 14:00:13.511870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:11.173 14:00:13 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:30:11.173 14:00:13 -- host/failover.sh@31 -- # bdevperf_pid=1755520
00:30:11.173 14:00:13 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:11.173 14:00:13 -- host/failover.sh@34 -- # waitforlisten 1755520 /var/tmp/bdevperf.sock
00:30:11.173 14:00:13 -- common/autotest_common.sh@819 -- # '[' -z 1755520 ']'
00:30:11.173 14:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:11.173 14:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:11.173 14:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:11.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:11.173 14:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:11.173 14:00:13 -- common/autotest_common.sh@10 -- # set +x
00:30:12.106 14:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:12.106 14:00:14 -- common/autotest_common.sh@852 -- # return 0
00:30:12.106 14:00:14 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:12.366 NVMe0n1
00:30:12.366 14:00:14 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:12.625
00:30:12.625 14:00:14 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:12.625 14:00:14 -- host/failover.sh@39 -- # run_test_pid=1755757
00:30:12.625 14:00:14 -- host/failover.sh@41 -- # sleep 1
00:30:13.559 14:00:15 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:13.818 [2024-07-11 14:00:16.151217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5600 is same with the state(5) to be set
(the tcp.c:1574 message above repeats 51 more times for tqpair=0xab5600, timestamps 14:00:16.151273 through 14:00:16.151578)
00:30:13.819 14:00:16 -- host/failover.sh@45 -- # sleep 3
00:30:17.117 14:00:19 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:17.117
00:30:17.117 14:00:19 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
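Stripped of the xtrace noise, the first failover round just logged comes down to a handful of RPCs. A condensed sketch follows; $SPDK is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout (an abbreviation introduced here, not a variable the harness defines).

    # First failover round, condensed from host/failover.sh@35-@45 above.
    NQN=nqn.2016-06.io.spdk:cnode1
    # Attach the same controller name over two portals, so bdev_nvme has a
    # primary path (4420) and an alternate (4421) to fall back on.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
    # Start the bdevperf run, then yank the primary portal mid-I/O; queued
    # commands complete as ABORTED - SQ DELETION (visible in try.txt below)
    # and I/O resumes on the surviving path.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420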
00:30:17.376 [2024-07-11 14:00:19.719020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab5df0 is same with the state(5) to be set
(the tcp.c:1574 message above repeats 45 more times for tqpair=0xab5df0, timestamps 14:00:19.719058 through 14:00:19.719329)
00:30:17.377 14:00:19 -- host/failover.sh@50 -- # sleep 3
00:30:20.669 14:00:22 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:20.669 [2024-07-11 14:00:22.904891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:20.669 14:00:22 -- host/failover.sh@55 -- # sleep 1
00:30:21.606 14:00:23 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
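Rounds two and three, seen above, rotate the portals the same way: a standby listener is (re)established before the active one is torn down, so the initiator always has somewhere to fail over. Continuing the sketch from before:

    # Later rounds, condensed from host/failover.sh@47-@57 above.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN   # third path
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    # To see which path the initiator currently considers active,
    # bdev_nvme_get_controllers reports per-controller state (its output
    # shape varies across SPDK releases).
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0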
00:30:21.866 [2024-07-11 14:00:24.095304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85f5c0 is same with the state(5) to be set
(the tcp.c:1574 message above repeats 53 more times for tqpair=0x85f5c0, timestamps 14:00:24.095346 through 14:00:24.095681)
00:30:21.867 14:00:24 -- host/failover.sh@59 -- # wait 1755757
00:30:28.483 0
00:30:28.483 14:00:30 -- host/failover.sh@61 -- # killprocess 1755520
00:30:28.483 14:00:30 -- common/autotest_common.sh@926 -- # '[' -z 1755520 ']'
00:30:28.483 14:00:30 -- common/autotest_common.sh@930 -- # kill -0 1755520
00:30:28.483 14:00:30 -- common/autotest_common.sh@931 -- # uname
00:30:28.483 14:00:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:28.483 14:00:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1755520
00:30:28.484 14:00:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:28.484 14:00:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:28.484 14:00:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1755520'
00:30:28.484 killing process with pid 1755520
00:30:28.484 14:00:30 -- common/autotest_common.sh@945 -- # kill 1755520
00:30:28.484 14:00:30 -- common/autotest_common.sh@950 -- # wait 1755520
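The killprocess helper whose xtrace appears above (autotest_common.sh@926 through @950) has a recognizable shape. Below is a rough reconstruction from nothing but those trace lines; the real helper in the SPDK repo does more, and the sudo branch here is a guess, since the trace only shows that test evaluating false.

    # Reconstructed from the @926-@950 xtrace above; a sketch, not the
    # actual autotest_common.sh implementation.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # @926: require a pid
        kill -0 "$pid" 2>/dev/null || return 0      # @930: already gone?
        local process_name=$pid
        if [ "$(uname)" = Linux ]; then             # @931
            process_name=$(ps --no-headers -o comm= "$pid")   # @932
        fi
        if [ "$process_name" = sudo ]; then         # @936: guessed branch;
            return 1                                # only the false case is
        fi                                          # visible in this trace
        echo "killing process with pid $pid"        # @944
        kill "$pid"                                 # @945
        wait "$pid"                                 # @950
    }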
00:30:28.484 14:00:30 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:28.484 [2024-07-11 14:00:13.566900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:30:28.484 [2024-07-11 14:00:13.566951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755520 ]
00:30:28.484 EAL: No free 2048 kB hugepages reported on node 1
00:30:28.484 [2024-07-11 14:00:13.621262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:28.484 [2024-07-11 14:00:13.659331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:28.484 Running I/O for 15 seconds...
00:30:28.484 [2024-07-11 14:00:16.151937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.484 [2024-07-11 14:00:16.151975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching nvme_io_qpair_print_command/spdk_nvme_print_completion pairs follow for the rest of the I/O in flight when the 4420 listener was removed: READ and WRITE commands, lba 14632 through 15584, each completed ABORTED - SQ DELETION (00/08); the excerpt then breaks off mid-pair)
00:30:28.485 [2024-07-11 14:00:16.153000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:28 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.485 [2024-07-11 14:00:16.153006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:28.486 [2024-07-11 14:00:16.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.486 [2024-07-11 14:00:16.153597] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.486 [2024-07-11 14:00:16.153626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.486 [2024-07-11 14:00:16.153634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.487 [2024-07-11 14:00:16.153657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.487 [2024-07-11 14:00:16.153671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.487 [2024-07-11 14:00:16.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.487 [2024-07-11 14:00:16.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.487 [2024-07-11 14:00:16.153859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.487 [2024-07-11 14:00:16.153866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2335060 is same with the state(5) to be set 00:30:28.487 [2024-07-11 14:00:16.153874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:28.487 [2024-07-11 14:00:16.153880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:28.487 [2024-07-11 14:00:16.153887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15408 len:8 PRP1 0x0 PRP2 0x0 00:30:28.487 [2024-07-11 14:00:16.153893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
00:30:28.487 [2024-07-11 14:00:16.153933] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2335060 was disconnected and freed. reset controller.
[2024-07-11 14:00:16.153946] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-11 14:00:16.153967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:16.153975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:16.153981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:16.153988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:16.153995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:16.154001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:16.154008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:16.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:16.154020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-11 14:00:16.155882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-11 14:00:16.155906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23164d0 (9): Bad file descriptor
[2024-07-11 14:00:16.212903] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:28.487 [2024-07-11 14:00:19.718694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:19.718737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:19.718747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:19.718759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:19.718766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:19.718772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:19.718780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-11 14:00:19.718786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-11 14:00:19.718793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23164d0 is same with the state(5) to be set
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: READ and WRITE commands on sqid:1 (lba 124480-125696, len:8), each completed with ABORTED - SQ DELETION (00/08) ...]
00:30:28.490 [2024-07-11 14:00:19.720903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125696 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 14:00:19.720910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 14:00:19.720925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.720941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.720956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.720970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 14:00:19.720985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.720993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.720999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 14:00:19.721043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.490 [2024-07-11 14:00:19.721057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 
14:00:19.721210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.490 [2024-07-11 14:00:19.721253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.490 [2024-07-11 14:00:19.721348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.490 [2024-07-11 14:00:19.721355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.490 [2024-07-11 14:00:19.721362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23229f0 is same with the state(5) to be set
00:30:28.490 [2024-07-11 14:00:19.721371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:28.490 [2024-07-11 14:00:19.721376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:28.490 [2024-07-11 14:00:19.721382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125288 len:8 PRP1 0x0 PRP2 0x0
00:30:28.490 [2024-07-11 14:00:19.721388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.490 [2024-07-11 14:00:19.721426] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23229f0 was disconnected and freed. reset controller.
00:30:28.490 [2024-07-11 14:00:19.721435] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:28.490 [2024-07-11 14:00:19.721442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:28.490 [2024-07-11 14:00:19.723309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:28.490 [2024-07-11 14:00:19.723333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23164d0 (9): Bad file descriptor
00:30:28.490 [2024-07-11 14:00:19.870174] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:28.491 [2024-07-11 14:00:24.095910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.491 [2024-07-11 14:00:24.095948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.491 [2024-07-11 14:00:24.095964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.491 [2024-07-11 14:00:24.095971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.491 [2024-07-11 14:00:24.095980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.491 [2024-07-11 14:00:24.095987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.491 [2024-07-11 14:00:24.095995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.491 [2024-07-11 14:00:24.096006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.491 [2024-07-11 14:00:24.096015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.491 [2024-07-11 14:00:24.096021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.491 [2024-07-11 14:00:24.096029]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096333] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.491 [2024-07-11 14:00:24.096431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.491 [2024-07-11 14:00:24.096552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.491 [2024-07-11 14:00:24.096568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.491 [2024-07-11 14:00:24.096587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.491 [2024-07-11 14:00:24.096595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.491 [2024-07-11 14:00:24.096601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125248 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:28.492 [2024-07-11 14:00:24.096794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.096895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 
14:00:24.096940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.096987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.096995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.097073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.097088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.097102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.492 [2024-07-11 14:00:24.097117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.492 [2024-07-11 14:00:24.097124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.492 [2024-07-11 14:00:24.097131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.493 [2024-07-11 14:00:24.097513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.493 [2024-07-11 14:00:24.097537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:28.493 [2024-07-11 14:00:24.097543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:30:28.493 [2024-07-11 14:00:24.097550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.493 [2024-07-11 14:00:24.097559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.493 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every other queued READ/WRITE command (lba 125632 through 126192, len:8), each completed as ABORTED - SQ DELETION (00/08) ...]
00:30:28.494 [2024-07-11 14:00:24.097857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2339110 is same with the state(5) to be set
00:30:28.494 [2024-07-11 14:00:24.097865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:28.494 [2024-07-11 14:00:24.097870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:28.494 [2024-07-11 14:00:24.097877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125728 len:8 PRP1 0x0 PRP2 0x0
00:30:28.494 [2024-07-11 14:00:24.097884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.494 [2024-07-11 14:00:24.097924] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2339110 was disconnected and freed. reset controller.
00:30:28.494 [2024-07-11 14:00:24.097935] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:28.494 [2024-07-11 14:00:24.097955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.494 [2024-07-11 14:00:24.097963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.494 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1, cid:2 and cid:3 ...]
00:30:28.494 [2024-07-11 14:00:24.098009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:28.494 [2024-07-11 14:00:24.098038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23164d0 (9): Bad file descriptor
00:30:28.494 [2024-07-11 14:00:24.099862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:28.494 [2024-07-11 14:00:24.120513] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
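What this burst records is SPDK's planned failover path: tearing down the TCP qpair deletes the submission queue, every queued READ/WRITE is completed back to bdev_nvme as ABORTED - SQ DELETION (00/08), bdev_nvme_failover_trid moves the transport ID to the next registered listener, and the subsequent controller reset succeeds. When reading a capture like this, a small filter makes the storm tractable; a hypothetical helper (the grep patterns are taken from the messages above, the script itself is not part of the test suite):

  #!/usr/bin/env bash
  # Hypothetical helper: summarize an SPDK failover capture (e.g. the try.txt dumped later in this log).
  log=${1:?usage: $0 <log-file>}
  # Aborted commands per opcode, as printed by nvme_io_qpair_print_command.
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' "$log" | awk '{print $NF}' | sort | uniq -c
  # Every path transition logged by bdev_nvme_failover_trid.
  grep 'Start failover from' "$log"
  # Completed resets; the failover test asserts on exactly this string.
  grep -c 'Resetting controller successful' "$log"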
00:30:28.494
00:30:28.494 Latency(us)
00:30:28.494 Device Information : runtime(s)  IOPS      MiB/s  Fail/s   TO/s  Average  min     max
00:30:28.494 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:28.494 Verification LBA range: start 0x0 length 0x4000
00:30:28.494 NVMe0n1            : 15.01       16588.44  64.80  1060.22  0.00  7239.55  619.74  14303.94
00:30:28.494 ===================================================================================================================
00:30:28.494 Total              :             16588.44  64.80  1060.22  0.00  7239.55  619.74  14303.94
00:30:28.494 Received shutdown signal, test time was about 15.000000 seconds
00:30:28.494
00:30:28.494 Latency(us)
00:30:28.494 Device Information : runtime(s)  IOPS      MiB/s  Fail/s   TO/s  Average  min     max
00:30:28.494 ===================================================================================================================
00:30:28.494 Total              :             0.00      0.00   0.00     0.00  0.00     0.00    0.00
00:30:28.494  14:00:30 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:28.494  14:00:30 -- host/failover.sh@65 -- # count=3
00:30:28.494  14:00:30 -- host/failover.sh@67 -- # (( count != 3 ))
00:30:28.494  14:00:30 -- host/failover.sh@73 -- # bdevperf_pid=1758280
00:30:28.494  14:00:30 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:28.494  14:00:30 -- host/failover.sh@75 -- # waitforlisten 1758280 /var/tmp/bdevperf.sock
00:30:28.494  14:00:30 -- common/autotest_common.sh@819 -- # '[' -z 1758280 ']'
00:30:28.494  14:00:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:28.494  14:00:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:28.494  14:00:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:28.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
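bdevperf is launched here with -z, so it parks after initialization and waits for an RPC on the socket named by -r instead of starting I/O immediately; waitforlisten then blocks until that socket is up. A minimal stand-alone version of the same launch (the polling loop is a simplification of the harness's waitforlisten helper, not its real implementation):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the trace
  BDEVPERF_SOCK=/var/tmp/bdevperf.sock
  "$SPDK_ROOT/build/examples/bdevperf" -z -r "$BDEVPERF_SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # Wait until the RPC socket exists before issuing any rpc.py calls against it.
  until [ -S "$BDEVPERF_SOCK" ]; do sleep 0.2; done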
00:30:28.494 14:00:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.494 14:00:30 -- common/autotest_common.sh@10 -- # set +x 00:30:28.753 14:00:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:28.753 14:00:31 -- common/autotest_common.sh@852 -- # return 0 00:30:28.753 14:00:31 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:29.015 [2024-07-11 14:00:31.347952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:29.015 14:00:31 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:29.274 [2024-07-11 14:00:31.520434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:29.274 14:00:31 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.532 NVMe0n1 00:30:29.532 14:00:31 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:29.791 00:30:29.791 14:00:32 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.050 00:30:30.050 14:00:32 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:30.050 14:00:32 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.309 14:00:32 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.568 14:00:32 -- host/failover.sh@87 -- # sleep 3 00:30:33.860 14:00:35 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:33.860 14:00:35 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:33.860 14:00:36 -- host/failover.sh@90 -- # run_test_pid=1759195 00:30:33.860 14:00:36 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:33.860 14:00:36 -- host/failover.sh@92 -- # wait 1759195 00:30:34.792 0 00:30:34.792 14:00:37 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:34.792 [2024-07-11 14:00:30.398727] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
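The rpc.py sequence traced above builds the failover topology for the second pass: the target grows listeners on 4421 and 4422, and the same controller name NVMe0 is attached through the bdevperf RPC socket once per portal, so bdev_nvme holds one active path and two alternates; the lines that follow are the bdevperf-side log replayed from try.txt. A condensed sketch of that setup, reusing SPDK_ROOT and BDEVPERF_SOCK from the previous block:

  RPC="$SPDK_ROOT/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Target side: additional portals for the existing subsystem.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
  # Initiator side: attaching the same bdev name over each portal registers failover paths.
  for port in 4420 4421 4422; do
      "$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done
  "$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_get_controllers | grep -q NVMe0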
00:30:34.792 [2024-07-11 14:00:30.398780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1758280 ] 00:30:34.792 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.792 [2024-07-11 14:00:30.455660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.792 [2024-07-11 14:00:30.490157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.792 [2024-07-11 14:00:32.808964] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:34.792 [2024-07-11 14:00:32.809013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.792 [2024-07-11 14:00:32.809025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.792 [2024-07-11 14:00:32.809033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.792 [2024-07-11 14:00:32.809040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.792 [2024-07-11 14:00:32.809047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.792 [2024-07-11 14:00:32.809054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.792 [2024-07-11 14:00:32.809061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:34.792 [2024-07-11 14:00:32.809068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:34.792 [2024-07-11 14:00:32.809075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:34.792 [2024-07-11 14:00:32.809096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:34.792 [2024-07-11 14:00:32.809109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb904d0 (9): Bad file descriptor 00:30:34.792 [2024-07-11 14:00:32.819771] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:34.792 Running I/O for 1 seconds... 
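The try.txt excerpt above shows the effect on the initiator: detaching the active 10.0.0.2:4420 path makes bdev_nvme fail over to 4421, reset the controller, and keep the one-second verify workload running. The driving side of that cycle, sketched under the same assumptions as the previous blocks:

  # Drop the currently active path; bdev_nvme_failover_trid selects the next portal.
  "$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  sleep 3
  "$RPC" -s "$BDEVPERF_SOCK" bdev_nvme_get_controllers | grep -q NVMe0   # controller must survive the detach
  # Start the RPC-driven run; bdevperf was launched with -z and is waiting for this.
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BDEVPERF_SOCK" perform_tests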
00:30:34.792 00:30:34.792 Latency(us) 00:30:34.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.793 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:34.793 Verification LBA range: start 0x0 length 0x4000 00:30:34.793 NVMe0n1 : 1.00 16763.28 65.48 0.00 0.00 7604.69 954.55 9402.99 00:30:34.793 =================================================================================================================== 00:30:34.793 Total : 16763.28 65.48 0.00 0.00 7604.69 954.55 9402.99 00:30:34.793 14:00:37 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:34.793 14:00:37 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:35.051 14:00:37 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.309 14:00:37 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:35.309 14:00:37 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:35.309 14:00:37 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.568 14:00:37 -- host/failover.sh@101 -- # sleep 3 00:30:38.857 14:00:40 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.857 14:00:40 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:38.857 14:00:41 -- host/failover.sh@108 -- # killprocess 1758280 00:30:38.857 14:00:41 -- common/autotest_common.sh@926 -- # '[' -z 1758280 ']' 00:30:38.857 14:00:41 -- common/autotest_common.sh@930 -- # kill -0 1758280 00:30:38.857 14:00:41 -- common/autotest_common.sh@931 -- # uname 00:30:38.857 14:00:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.857 14:00:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1758280 00:30:38.857 14:00:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:38.857 14:00:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:38.857 14:00:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1758280' 00:30:38.857 killing process with pid 1758280 00:30:38.857 14:00:41 -- common/autotest_common.sh@945 -- # kill 1758280 00:30:38.857 14:00:41 -- common/autotest_common.sh@950 -- # wait 1758280 00:30:38.857 14:00:41 -- host/failover.sh@110 -- # sync 00:30:38.857 14:00:41 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.116 14:00:41 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:39.116 14:00:41 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:39.116 14:00:41 -- host/failover.sh@116 -- # nvmftestfini 00:30:39.116 14:00:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:39.116 14:00:41 -- nvmf/common.sh@116 -- # sync 00:30:39.116 14:00:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:39.116 14:00:41 -- nvmf/common.sh@119 -- # set +e 00:30:39.116 14:00:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:39.116 14:00:41 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:30:39.116 rmmod nvme_tcp 00:30:39.116 rmmod nvme_fabrics 00:30:39.116 rmmod nvme_keyring 00:30:39.116 14:00:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:39.116 14:00:41 -- nvmf/common.sh@123 -- # set -e 00:30:39.116 14:00:41 -- nvmf/common.sh@124 -- # return 0 00:30:39.116 14:00:41 -- nvmf/common.sh@477 -- # '[' -n 1755042 ']' 00:30:39.116 14:00:41 -- nvmf/common.sh@478 -- # killprocess 1755042 00:30:39.116 14:00:41 -- common/autotest_common.sh@926 -- # '[' -z 1755042 ']' 00:30:39.116 14:00:41 -- common/autotest_common.sh@930 -- # kill -0 1755042 00:30:39.116 14:00:41 -- common/autotest_common.sh@931 -- # uname 00:30:39.116 14:00:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:39.116 14:00:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1755042 00:30:39.375 14:00:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:39.375 14:00:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:39.375 14:00:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1755042' 00:30:39.375 killing process with pid 1755042 00:30:39.375 14:00:41 -- common/autotest_common.sh@945 -- # kill 1755042 00:30:39.375 14:00:41 -- common/autotest_common.sh@950 -- # wait 1755042 00:30:39.375 14:00:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:39.375 14:00:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:39.375 14:00:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:39.375 14:00:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:39.375 14:00:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:39.375 14:00:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.375 14:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:39.375 14:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.909 14:00:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:41.909 00:30:41.909 real 0m38.073s 00:30:41.909 user 2m2.387s 00:30:41.909 sys 0m7.619s 00:30:41.909 14:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.909 14:00:43 -- common/autotest_common.sh@10 -- # set +x 00:30:41.909 ************************************ 00:30:41.909 END TEST nvmf_failover 00:30:41.909 ************************************ 00:30:41.909 14:00:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:41.909 14:00:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:41.909 14:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.909 14:00:43 -- common/autotest_common.sh@10 -- # set +x 00:30:41.909 ************************************ 00:30:41.909 START TEST nvmf_discovery 00:30:41.909 ************************************ 00:30:41.909 14:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:41.909 * Looking for test storage... 
00:30:41.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:41.909  14:00:43 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:41.909  14:00:43 -- nvmf/common.sh@7 -- # uname -s
00:30:41.909  14:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:41.909  14:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:41.909  14:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:41.909  14:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:41.909  14:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:41.909  14:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:41.909  14:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:41.909  14:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:41.909  14:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:41.909  14:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:41.909  14:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:30:41.909  14:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:30:41.909  14:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:41.909  14:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:41.909  14:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:41.909  14:00:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:41.909  14:00:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:41.909  14:00:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:41.909  14:00:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:41.909  14:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain bin directories repeated five more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:41.909  14:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same accumulated PATH as above ...]:/var/lib/snapd/snap/bin
00:30:41.909  14:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same accumulated PATH as above ...]:/var/lib/snapd/snap/bin
00:30:41.909  14:00:43 -- paths/export.sh@5 -- # export PATH
00:30:41.909  14:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same accumulated PATH as above ...]:/var/lib/snapd/snap/bin
00:30:41.909  14:00:43 -- nvmf/common.sh@46 -- # : 0
00:30:41.909  14:00:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:30:41.909  14:00:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:30:41.909  14:00:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:30:41.910  14:00:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:41.910  14:00:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:41.910  14:00:43 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:30:41.910  14:00:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:30:41.910  14:00:43 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:30:41.910  14:00:43 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:30:41.910  14:00:43 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:30:41.910  14:00:43 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:30:41.910  14:00:43 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:30:41.910  14:00:43 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:30:41.910  14:00:43 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:30:41.910  14:00:43 -- host/discovery.sh@25 -- # nvmftestinit
00:30:41.910  14:00:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:30:41.910  14:00:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:41.910  14:00:43 -- nvmf/common.sh@436 -- # prepare_net_devs
00:30:41.910  14:00:43 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:30:41.910  14:00:43 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:30:41.910  14:00:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:41.910  14:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:41.910  14:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:41.910  14:00:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:30:41.910  14:00:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:30:41.910  14:00:43 -- nvmf/common.sh@284 -- # xtrace_disable
00:30:41.910  14:00:43 -- common/autotest_common.sh@10 -- # set +x
00:30:47.184  14:00:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:30:47.184  14:00:48 -- nvmf/common.sh@290 -- # pci_devs=()
00:30:47.184  14:00:48 -- nvmf/common.sh@290 -- # local -a pci_devs
00:30:47.184  14:00:48 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:30:47.184  14:00:48 --
nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:47.184 14:00:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:47.184 14:00:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:47.184 14:00:48 -- nvmf/common.sh@294 -- # net_devs=() 00:30:47.184 14:00:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:47.184 14:00:48 -- nvmf/common.sh@295 -- # e810=() 00:30:47.184 14:00:48 -- nvmf/common.sh@295 -- # local -ga e810 00:30:47.184 14:00:48 -- nvmf/common.sh@296 -- # x722=() 00:30:47.184 14:00:48 -- nvmf/common.sh@296 -- # local -ga x722 00:30:47.184 14:00:48 -- nvmf/common.sh@297 -- # mlx=() 00:30:47.184 14:00:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:47.184 14:00:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:47.184 14:00:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:47.184 14:00:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:47.184 14:00:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:47.184 14:00:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:47.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:47.184 14:00:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:47.184 14:00:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:47.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:47.184 14:00:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:47.184 
14:00:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.184 14:00:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.184 14:00:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:47.184 Found net devices under 0000:86:00.0: cvl_0_0 00:30:47.184 14:00:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.184 14:00:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:47.184 14:00:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:47.184 14:00:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:47.184 14:00:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:47.184 Found net devices under 0000:86:00.1: cvl_0_1 00:30:47.184 14:00:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:47.184 14:00:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:47.184 14:00:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:47.184 14:00:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:47.184 14:00:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:47.184 14:00:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:47.184 14:00:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:47.184 14:00:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:47.184 14:00:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:47.184 14:00:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:47.184 14:00:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:47.184 14:00:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:47.184 14:00:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:47.184 14:00:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:47.184 14:00:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:47.184 14:00:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:47.184 14:00:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:47.184 14:00:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:47.184 14:00:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:47.184 14:00:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:47.184 14:00:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:47.184 14:00:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:47.184 14:00:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:47.184 14:00:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:47.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:47.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:30:47.184 00:30:47.184 --- 10.0.0.2 ping statistics --- 00:30:47.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.184 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:47.184 14:00:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:47.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:47.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:30:47.184 00:30:47.184 --- 10.0.0.1 ping statistics --- 00:30:47.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:47.185 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:47.185 14:00:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:47.185 14:00:48 -- nvmf/common.sh@410 -- # return 0 00:30:47.185 14:00:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:47.185 14:00:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:47.185 14:00:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:47.185 14:00:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:47.185 14:00:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:47.185 14:00:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:47.185 14:00:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:47.185 14:00:48 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:47.185 14:00:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:47.185 14:00:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:47.185 14:00:48 -- common/autotest_common.sh@10 -- # set +x 00:30:47.185 14:00:48 -- nvmf/common.sh@469 -- # nvmfpid=1763471 00:30:47.185 14:00:48 -- nvmf/common.sh@470 -- # waitforlisten 1763471 00:30:47.185 14:00:48 -- common/autotest_common.sh@819 -- # '[' -z 1763471 ']' 00:30:47.185 14:00:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.185 14:00:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:47.185 14:00:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.185 14:00:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:47.185 14:00:48 -- common/autotest_common.sh@10 -- # set +x 00:30:47.185 14:00:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:47.185 [2024-07-11 14:00:48.978022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:47.185 [2024-07-11 14:00:48.978062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.185 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.185 [2024-07-11 14:00:49.035122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.185 [2024-07-11 14:00:49.073583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:47.185 [2024-07-11 14:00:49.073703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:47.185 [2024-07-11 14:00:49.073712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:47.185 [2024-07-11 14:00:49.073718] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
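nvmf_tcp_init above is what gives the test its two endpoints on a single host: the first ice port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified. The same wiring, condensed from the trace (root privileges required; flush/cleanup steps omitted):

  TARGET_NS=cvl_0_0_ns_spdk
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target namespace
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target namespace -> initiator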
00:30:47.185 [2024-07-11 14:00:49.073739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.444 14:00:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:47.444 14:00:49 -- common/autotest_common.sh@852 -- # return 0 00:30:47.444 14:00:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:47.444 14:00:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 14:00:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.444 14:00:49 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.444 14:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 [2024-07-11 14:00:49.795924] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.444 14:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.444 14:00:49 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:47.444 14:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 [2024-07-11 14:00:49.804042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:47.444 14:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.444 14:00:49 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:47.444 14:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 null0 00:30:47.444 14:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.444 14:00:49 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:47.444 14:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 null1 00:30:47.444 14:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.444 14:00:49 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:47.444 14:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 14:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:47.444 14:00:49 -- host/discovery.sh@45 -- # hostpid=1763544 00:30:47.444 14:00:49 -- host/discovery.sh@46 -- # waitforlisten 1763544 /tmp/host.sock 00:30:47.444 14:00:49 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:47.444 14:00:49 -- common/autotest_common.sh@819 -- # '[' -z 1763544 ']' 00:30:47.444 14:00:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:30:47.444 14:00:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:47.444 14:00:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:47.444 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:47.444 14:00:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:47.444 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:30:47.444 [2024-07-11 14:00:49.861793] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
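From here on two SPDK applications are running: the target (nvmfpid 1763471, core mask 0x2, started inside the target namespace) and, a few lines below, a second nvmf_tgt (hostpid 1763544, core mask 0x1) that acts purely as the discovery host behind /tmp/host.sock. The scaffolding the trace builds, as a sketch under the same assumptions as before (rpc.py talks to the target's default /var/tmp/spdk.sock when -s is omitted):

  RPC="$SPDK_ROOT/scripts/rpc.py"
  HOST_SOCK=/tmp/host.sock
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  # Listen for discovery on the well-known discovery NQN at port 8009.
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # Two null bdevs (1000 MiB each, 512-byte blocks) to expose as namespaces later.
  "$RPC" bdev_null_create null0 1000 512
  "$RPC" bdev_null_create null1 1000 512
  # The discovery host is just another nvmf_tgt bound to its own RPC socket.
  "$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" &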
00:30:47.444 [2024-07-11 14:00:49.861831] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763544 ] 00:30:47.444 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.709 [2024-07-11 14:00:49.914429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.709 [2024-07-11 14:00:49.951757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:47.709 [2024-07-11 14:00:49.951872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.278 14:00:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:48.278 14:00:50 -- common/autotest_common.sh@852 -- # return 0 00:30:48.278 14:00:50 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.278 14:00:50 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:48.278 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.278 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.278 14:00:50 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:48.278 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.278 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.278 14:00:50 -- host/discovery.sh@72 -- # notify_id=0 00:30:48.278 14:00:50 -- host/discovery.sh@78 -- # get_subsystem_names 00:30:48.278 14:00:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.278 14:00:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.278 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.278 14:00:50 -- host/discovery.sh@59 -- # sort 00:30:48.278 14:00:50 -- host/discovery.sh@59 -- # xargs 00:30:48.278 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.278 14:00:50 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:30:48.278 14:00:50 -- host/discovery.sh@79 -- # get_bdev_list 00:30:48.278 14:00:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.278 14:00:50 -- host/discovery.sh@55 -- # xargs 00:30:48.278 14:00:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.278 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.278 14:00:50 -- host/discovery.sh@55 -- # sort 00:30:48.278 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.278 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.538 14:00:50 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:30:48.538 14:00:50 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.538 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.538 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.538 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.538 14:00:50 -- host/discovery.sh@82 -- # get_subsystem_names 00:30:48.538 14:00:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.538 14:00:50 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:30:48.538 14:00:50 -- host/discovery.sh@59 -- # sort 00:30:48.538 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.538 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.538 14:00:50 -- host/discovery.sh@59 -- # xargs 00:30:48.538 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.538 14:00:50 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:30:48.538 14:00:50 -- host/discovery.sh@83 -- # get_bdev_list 00:30:48.538 14:00:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.538 14:00:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.538 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.538 14:00:50 -- host/discovery.sh@55 -- # sort 00:30:48.538 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.539 14:00:50 -- host/discovery.sh@55 -- # xargs 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.539 14:00:50 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:48.539 14:00:50 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:48.539 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.539 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.539 14:00:50 -- host/discovery.sh@86 -- # get_subsystem_names 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # sort 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # xargs 00:30:48.539 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.539 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.539 14:00:50 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:30:48.539 14:00:50 -- host/discovery.sh@87 -- # get_bdev_list 00:30:48.539 14:00:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.539 14:00:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.539 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.539 14:00:50 -- host/discovery.sh@55 -- # sort 00:30:48.539 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.539 14:00:50 -- host/discovery.sh@55 -- # xargs 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.539 14:00:50 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:48.539 14:00:50 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.539 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.539 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:30:48.539 [2024-07-11 14:00:50.971147] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.539 14:00:50 -- host/discovery.sh@92 -- # get_subsystem_names 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.539 14:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # sort 00:30:48.539 14:00:50 -- host/discovery.sh@59 -- # xargs 00:30:48.539 14:00:50 -- 
common/autotest_common.sh@10 -- # set +x 00:30:48.539 14:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.843 14:00:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:48.843 14:00:51 -- host/discovery.sh@93 -- # get_bdev_list 00:30:48.843 14:00:51 -- host/discovery.sh@55 -- # xargs 00:30:48.843 14:00:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.843 14:00:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.843 14:00:51 -- host/discovery.sh@55 -- # sort 00:30:48.843 14:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.843 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:30:48.843 14:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.843 14:00:51 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:30:48.843 14:00:51 -- host/discovery.sh@94 -- # get_notification_count 00:30:48.843 14:00:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:48.843 14:00:51 -- host/discovery.sh@74 -- # jq '. | length' 00:30:48.843 14:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.843 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:30:48.843 14:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.843 14:00:51 -- host/discovery.sh@74 -- # notification_count=0 00:30:48.843 14:00:51 -- host/discovery.sh@75 -- # notify_id=0 00:30:48.843 14:00:51 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:30:48.843 14:00:51 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:48.843 14:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:48.843 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:30:48.843 14:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:48.843 14:00:51 -- host/discovery.sh@100 -- # sleep 1 00:30:49.411 [2024-07-11 14:00:51.714328] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:49.411 [2024-07-11 14:00:51.714351] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:49.411 [2024-07-11 14:00:51.714365] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:49.411 [2024-07-11 14:00:51.800646] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:49.670 [2024-07-11 14:00:52.021986] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:49.670 [2024-07-11 14:00:52.022007] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:49.930 14:00:52 -- host/discovery.sh@101 -- # get_subsystem_names 00:30:49.930 14:00:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:49.930 14:00:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:49.930 14:00:52 -- host/discovery.sh@59 -- # sort 00:30:49.930 14:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.930 14:00:52 -- host/discovery.sh@59 -- # xargs 00:30:49.930 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:49.930 14:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@102 -- # get_bdev_list 00:30:49.930 14:00:52 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.930 14:00:52 -- host/discovery.sh@55 -- # xargs 00:30:49.930 14:00:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:49.930 14:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.930 14:00:52 -- host/discovery.sh@55 -- # sort 00:30:49.930 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:49.930 14:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:30:49.930 14:00:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:49.930 14:00:52 -- host/discovery.sh@63 -- # xargs 00:30:49.930 14:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.930 14:00:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:49.930 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:49.930 14:00:52 -- host/discovery.sh@63 -- # sort -n 00:30:49.930 14:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@104 -- # get_notification_count 00:30:49.930 14:00:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:49.930 14:00:52 -- host/discovery.sh@74 -- # jq '. | length' 00:30:49.930 14:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.930 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:49.930 14:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@74 -- # notification_count=1 00:30:49.930 14:00:52 -- host/discovery.sh@75 -- # notify_id=1 00:30:49.930 14:00:52 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:49.930 14:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.930 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:30:49.930 14:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.930 14:00:52 -- host/discovery.sh@109 -- # sleep 1 00:30:51.309 14:00:53 -- host/discovery.sh@110 -- # get_bdev_list 00:30:51.309 14:00:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.309 14:00:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.309 14:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.309 14:00:53 -- host/discovery.sh@55 -- # sort 00:30:51.309 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:51.309 14:00:53 -- host/discovery.sh@55 -- # xargs 00:30:51.309 14:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.309 14:00:53 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.309 14:00:53 -- host/discovery.sh@111 -- # get_notification_count 00:30:51.309 14:00:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:51.309 14:00:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:51.309 14:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.309 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:51.309 14:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.309 14:00:53 -- host/discovery.sh@74 -- # notification_count=1 00:30:51.309 14:00:53 -- host/discovery.sh@75 -- # notify_id=2 00:30:51.309 14:00:53 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:30:51.309 14:00:53 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:51.309 14:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.309 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:30:51.309 [2024-07-11 14:00:53.437987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:51.309 [2024-07-11 14:00:53.438760] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:51.309 [2024-07-11 14:00:53.438782] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.309 14:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.309 14:00:53 -- host/discovery.sh@117 -- # sleep 1 00:30:51.309 [2024-07-11 14:00:53.527027] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:51.309 [2024-07-11 14:00:53.631631] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.309 [2024-07-11 14:00:53.631646] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:51.309 [2024-07-11 14:00:53.631651] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:52.246 14:00:54 -- host/discovery.sh@118 -- # get_subsystem_names 00:30:52.246 14:00:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:52.246 14:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.246 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 14:00:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:52.246 14:00:54 -- host/discovery.sh@59 -- # sort 00:30:52.246 14:00:54 -- host/discovery.sh@59 -- # xargs 00:30:52.246 14:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@119 -- # get_bdev_list 00:30:52.246 14:00:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:52.246 14:00:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:52.246 14:00:54 -- host/discovery.sh@55 -- # sort 00:30:52.246 14:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.246 14:00:54 -- host/discovery.sh@55 -- # xargs 00:30:52.246 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 14:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:30:52.246 14:00:54 -- host/discovery.sh@63 -- # xargs 00:30:52.246 14:00:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:52.246 14:00:54 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:30:52.246 14:00:54 -- host/discovery.sh@63 -- # sort -n 00:30:52.246 14:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.246 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 14:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@121 -- # get_notification_count 00:30:52.246 14:00:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:52.246 14:00:54 -- host/discovery.sh@74 -- # jq '. | length' 00:30:52.246 14:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.246 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 14:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@74 -- # notification_count=0 00:30:52.246 14:00:54 -- host/discovery.sh@75 -- # notify_id=2 00:30:52.246 14:00:54 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.246 14:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.246 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:30:52.246 [2024-07-11 14:00:54.637660] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:52.246 [2024-07-11 14:00:54.637681] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:52.246 14:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.246 14:00:54 -- host/discovery.sh@127 -- # sleep 1 00:30:52.246 [2024-07-11 14:00:54.646334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.246 [2024-07-11 14:00:54.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.246 [2024-07-11 14:00:54.646358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.246 [2024-07-11 14:00:54.646365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.246 [2024-07-11 14:00:54.646372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.246 [2024-07-11 14:00:54.646379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.246 [2024-07-11 14:00:54.646386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:52.246 [2024-07-11 14:00:54.646393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:52.246 [2024-07-11 14:00:54.646400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.246 [2024-07-11 14:00:54.656349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.246 [2024-07-11 14:00:54.666387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.246 [2024-07-11 14:00:54.666632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.666819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.666829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.246 [2024-07-11 14:00:54.666836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.246 [2024-07-11 14:00:54.666847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.246 [2024-07-11 14:00:54.666863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.246 [2024-07-11 14:00:54.666873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.246 [2024-07-11 14:00:54.666881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.246 [2024-07-11 14:00:54.666891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.246 [2024-07-11 14:00:54.676438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.246 [2024-07-11 14:00:54.676710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.676856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.676867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.246 [2024-07-11 14:00:54.676873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.246 [2024-07-11 14:00:54.676883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.246 [2024-07-11 14:00:54.676900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.246 [2024-07-11 14:00:54.676907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.246 [2024-07-11 14:00:54.676913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.246 [2024-07-11 14:00:54.676922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:52.246 [2024-07-11 14:00:54.686488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.246 [2024-07-11 14:00:54.686721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.686857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.686868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.246 [2024-07-11 14:00:54.686875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.246 [2024-07-11 14:00:54.686884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.246 [2024-07-11 14:00:54.686894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.246 [2024-07-11 14:00:54.686900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.246 [2024-07-11 14:00:54.686907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.246 [2024-07-11 14:00:54.686916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.246 [2024-07-11 14:00:54.696540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.246 [2024-07-11 14:00:54.696797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.696934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.246 [2024-07-11 14:00:54.696943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.246 [2024-07-11 14:00:54.696950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.246 [2024-07-11 14:00:54.696959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.246 [2024-07-11 14:00:54.696975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.246 [2024-07-11 14:00:54.696982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.246 [2024-07-11 14:00:54.696991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.246 [2024-07-11 14:00:54.697000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:52.506 [2024-07-11 14:00:54.706589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.506 [2024-07-11 14:00:54.706789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.506 [2024-07-11 14:00:54.706993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.506 [2024-07-11 14:00:54.707003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.506 [2024-07-11 14:00:54.707010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.506 [2024-07-11 14:00:54.707019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.506 [2024-07-11 14:00:54.707035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.506 [2024-07-11 14:00:54.707041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.506 [2024-07-11 14:00:54.707047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.506 [2024-07-11 14:00:54.707056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.506 [2024-07-11 14:00:54.716637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:52.506 [2024-07-11 14:00:54.716787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.506 [2024-07-11 14:00:54.716976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.506 [2024-07-11 14:00:54.716985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cf8a0 with addr=10.0.0.2, port=4420 00:30:52.506 [2024-07-11 14:00:54.716991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf8a0 is same with the state(5) to be set 00:30:52.506 [2024-07-11 14:00:54.717001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cf8a0 (9): Bad file descriptor 00:30:52.506 [2024-07-11 14:00:54.717010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:52.506 [2024-07-11 14:00:54.717015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:52.506 [2024-07-11 14:00:54.717022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:52.506 [2024-07-11 14:00:54.717030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
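[Annotation] The connect() errno 111 (ECONNREFUSED) cycles above are the direct fallout of the listener removal issued at host/discovery.sh@126: once the target stops listening on 10.0.0.2:4420, every host-side reconnect to that port is refused until the next discovery log page prunes the dead path. A minimal sketch of the trigger, reusing the subsystem NQN, address, and port from this run:

    # Target side: drop the 4420 listener while the host is still attached.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # The host bdev_nvme layer keeps retrying 10.0.0.2:4420 (errno 111)
    # until the discovery poller reports the 4420 path "not found" and
    # only the 4421 path remains.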
00:30:52.506 [2024-07-11 14:00:54.724090] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:52.506 [2024-07-11 14:00:54.724106] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:53.445 14:00:55 -- host/discovery.sh@128 -- # get_subsystem_names 00:30:53.445 14:00:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.445 14:00:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.445 14:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.445 14:00:55 -- host/discovery.sh@59 -- # sort 00:30:53.445 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:30:53.445 14:00:55 -- host/discovery.sh@59 -- # xargs 00:30:53.445 14:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@129 -- # get_bdev_list 00:30:53.445 14:00:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.445 14:00:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.445 14:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.445 14:00:55 -- host/discovery.sh@55 -- # sort 00:30:53.445 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:30:53.445 14:00:55 -- host/discovery.sh@55 -- # xargs 00:30:53.445 14:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:30:53.445 14:00:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:53.445 14:00:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:53.445 14:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.445 14:00:55 -- host/discovery.sh@63 -- # sort -n 00:30:53.445 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:30:53.445 14:00:55 -- host/discovery.sh@63 -- # xargs 00:30:53.445 14:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@131 -- # get_notification_count 00:30:53.445 14:00:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:53.445 14:00:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:53.445 14:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.445 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:30:53.445 14:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@74 -- # notification_count=0 00:30:53.445 14:00:55 -- host/discovery.sh@75 -- # notify_id=2 00:30:53.445 14:00:55 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:53.445 14:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.445 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:30:53.445 14:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.445 14:00:55 -- host/discovery.sh@135 -- # sleep 1 00:30:54.824 14:00:56 -- host/discovery.sh@136 -- # get_subsystem_names 00:30:54.824 14:00:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.824 14:00:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.824 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:30:54.824 14:00:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.824 14:00:56 -- host/discovery.sh@59 -- # sort 00:30:54.824 14:00:56 -- host/discovery.sh@59 -- # xargs 00:30:54.824 14:00:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.824 14:00:56 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:30:54.824 14:00:56 -- host/discovery.sh@137 -- # get_bdev_list 00:30:54.824 14:00:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.824 14:00:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.824 14:00:56 -- host/discovery.sh@55 -- # xargs 00:30:54.824 14:00:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.824 14:00:56 -- host/discovery.sh@55 -- # sort 00:30:54.824 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:30:54.824 14:00:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.824 14:00:56 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:30:54.824 14:00:56 -- host/discovery.sh@138 -- # get_notification_count 00:30:54.824 14:00:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:54.824 14:00:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:54.824 14:00:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.824 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:30:54.824 14:00:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.824 14:00:56 -- host/discovery.sh@74 -- # notification_count=2 00:30:54.824 14:00:56 -- host/discovery.sh@75 -- # notify_id=4 00:30:54.824 14:00:56 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:30:54.824 14:00:56 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:54.824 14:00:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.824 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:30:55.760 [2024-07-11 14:00:58.040704] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:55.760 [2024-07-11 14:00:58.040721] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:55.761 [2024-07-11 14:00:58.040731] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:55.761 [2024-07-11 14:00:58.129003] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:56.020 [2024-07-11 14:00:58.442022] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:56.020 [2024-07-11 14:00:58.442049] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:56.020 14:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.020 14:00:58 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.020 14:00:58 -- common/autotest_common.sh@640 -- # local es=0 00:30:56.020 14:00:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.020 14:00:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:56.020 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.020 14:00:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:56.020 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.020 14:00:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.020 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.020 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.020 request: 00:30:56.020 { 00:30:56.020 "name": "nvme", 00:30:56.020 "trtype": "tcp", 00:30:56.020 "traddr": "10.0.0.2", 00:30:56.020 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:56.020 "adrfam": "ipv4", 00:30:56.020 "trsvcid": "8009", 00:30:56.020 "wait_for_attach": true, 00:30:56.020 "method": "bdev_nvme_start_discovery", 00:30:56.020 "req_id": 1 00:30:56.020 } 00:30:56.020 Got JSON-RPC error response 00:30:56.020 response: 00:30:56.020 { 00:30:56.020 "code": -17, 00:30:56.020 "message": "File exists" 00:30:56.020 } 00:30:56.020 14:00:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:56.020 14:00:58 -- common/autotest_common.sh@643 -- # es=1 00:30:56.020 14:00:58 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:56.020 14:00:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:56.020 14:00:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:56.020 14:00:58 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:30:56.020 14:00:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:56.020 14:00:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:56.020 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.020 14:00:58 -- host/discovery.sh@67 -- # sort 00:30:56.020 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.020 14:00:58 -- host/discovery.sh@67 -- # xargs 00:30:56.279 14:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:30:56.279 14:00:58 -- host/discovery.sh@147 -- # get_bdev_list 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # sort 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.279 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.279 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # xargs 00:30:56.279 14:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.279 14:00:58 -- common/autotest_common.sh@640 -- # local es=0 00:30:56.279 14:00:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.279 14:00:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.279 14:00:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.279 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.279 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 request: 00:30:56.279 { 00:30:56.279 "name": "nvme_second", 00:30:56.279 "trtype": "tcp", 00:30:56.279 "traddr": "10.0.0.2", 00:30:56.279 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:56.279 "adrfam": "ipv4", 00:30:56.279 "trsvcid": "8009", 00:30:56.279 "wait_for_attach": true, 00:30:56.279 "method": "bdev_nvme_start_discovery", 00:30:56.279 "req_id": 1 00:30:56.279 } 00:30:56.279 Got JSON-RPC error response 00:30:56.279 response: 00:30:56.279 { 00:30:56.279 "code": -17, 00:30:56.279 "message": "File exists" 00:30:56.279 } 00:30:56.279 14:00:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:56.279 14:00:58 -- common/autotest_common.sh@643 -- # es=1 00:30:56.279 14:00:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:56.279 14:00:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:56.279 14:00:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:56.279 
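[Annotation] Both "File exists" responses above are deliberate negative checks (host/discovery.sh@144 and @150): starting discovery a second time against an endpoint that is already under discovery must fail. A minimal sketch of the duplicate call, assuming the same host RPC socket and test host NQN used in this run:

    # First call succeeds and attaches nvme0; an identical second call,
    # or a second name pointed at the same 10.0.0.2:8009 endpoint,
    # returns JSON-RPC error -17 (-EEXIST, "File exists").
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w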
14:00:58 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:30:56.279 14:00:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:56.279 14:00:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:56.279 14:00:58 -- host/discovery.sh@67 -- # sort 00:30:56.279 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.279 14:00:58 -- host/discovery.sh@67 -- # xargs 00:30:56.279 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 14:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:30:56.279 14:00:58 -- host/discovery.sh@153 -- # get_bdev_list 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.279 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.279 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # sort 00:30:56.279 14:00:58 -- host/discovery.sh@55 -- # xargs 00:30:56.279 14:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:56.279 14:00:58 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:56.279 14:00:58 -- common/autotest_common.sh@640 -- # local es=0 00:30:56.279 14:00:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:56.279 14:00:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:30:56.279 14:00:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:30:56.280 14:00:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:56.280 14:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:56.280 14:00:58 -- common/autotest_common.sh@10 -- # set +x 00:30:57.657 [2024-07-11 14:00:59.685521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.657 [2024-07-11 14:00:59.685710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.657 [2024-07-11 14:00:59.685722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70cb10 with addr=10.0.0.2, port=8010 00:30:57.657 [2024-07-11 14:00:59.685733] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:57.657 [2024-07-11 14:00:59.685740] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:57.657 [2024-07-11 14:00:59.685746] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:58.593 [2024-07-11 14:01:00.687993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.593 [2024-07-11 14:01:00.688287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.593 [2024-07-11 14:01:00.688299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x70cb10 with addr=10.0.0.2, port=8010 00:30:58.593 [2024-07-11 14:01:00.688313] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:58.593 [2024-07-11 14:01:00.688319] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:58.593 [2024-07-11 14:01:00.688325] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:59.535 [2024-07-11 14:01:01.690115] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:59.535 request: 00:30:59.535 { 00:30:59.535 "name": "nvme_second", 00:30:59.535 "trtype": "tcp", 00:30:59.535 "traddr": "10.0.0.2", 00:30:59.535 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:59.535 "adrfam": "ipv4", 00:30:59.535 "trsvcid": "8010", 00:30:59.535 "attach_timeout_ms": 3000, 00:30:59.535 "method": "bdev_nvme_start_discovery", 00:30:59.535 "req_id": 1 00:30:59.535 } 00:30:59.535 Got JSON-RPC error response 00:30:59.535 response: 00:30:59.535 { 00:30:59.535 "code": -110, 00:30:59.535 "message": "Connection timed out" 00:30:59.535 } 00:30:59.535 14:01:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:30:59.535 14:01:01 -- common/autotest_common.sh@643 -- # es=1 00:30:59.535 14:01:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:30:59.535 14:01:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:30:59.535 14:01:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:30:59.535 14:01:01 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:30:59.535 14:01:01 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:59.535 14:01:01 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:59.535 14:01:01 -- host/discovery.sh@67 -- # sort 00:30:59.535 14:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.535 14:01:01 -- host/discovery.sh@67 -- # xargs 00:30:59.535 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:30:59.535 14:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.535 14:01:01 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:30:59.535 14:01:01 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:30:59.535 14:01:01 -- host/discovery.sh@162 -- # kill 1763544 00:30:59.535 14:01:01 -- host/discovery.sh@163 -- # nvmftestfini 00:30:59.535 14:01:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:59.535 14:01:01 -- nvmf/common.sh@116 -- # sync 00:30:59.535 14:01:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:59.535 14:01:01 -- nvmf/common.sh@119 -- # set +e 00:30:59.535 14:01:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:59.535 14:01:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:59.535 rmmod nvme_tcp 00:30:59.535 rmmod nvme_fabrics 00:30:59.535 rmmod nvme_keyring 00:30:59.535 14:01:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:59.535 14:01:01 -- nvmf/common.sh@123 -- # set -e 00:30:59.535 14:01:01 -- nvmf/common.sh@124 -- # return 0 00:30:59.535 14:01:01 -- nvmf/common.sh@477 -- # '[' -n 1763471 ']' 00:30:59.535 14:01:01 -- nvmf/common.sh@478 -- # killprocess 1763471 00:30:59.535 14:01:01 -- common/autotest_common.sh@926 -- # '[' -z 1763471 ']' 00:30:59.535 14:01:01 -- common/autotest_common.sh@930 -- # kill -0 1763471 00:30:59.535 14:01:01 -- common/autotest_common.sh@931 -- # uname 00:30:59.535 14:01:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:59.535 14:01:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1763471 00:30:59.535 
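[Annotation] The -110 response earlier in this block is the last negative check (host/discovery.sh@156): nothing listens on port 8010, and -T bounds the attach attempt (attach_timeout_ms in the request JSON above), so after two refused connects the poller gives up with "Connection timed out" instead of retrying forever. A minimal sketch of that call, reusing the values from this run:

    # -T 3000 caps the attach at 3000 ms; expect JSON-RPC error -110
    # (-ETIMEDOUT) once the timeout expires.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000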
14:01:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:59.535 14:01:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:59.535 14:01:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1763471' 00:30:59.535 killing process with pid 1763471 00:30:59.535 14:01:01 -- common/autotest_common.sh@945 -- # kill 1763471 00:30:59.535 14:01:01 -- common/autotest_common.sh@950 -- # wait 1763471 00:30:59.794 14:01:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:59.794 14:01:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:59.794 14:01:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:59.794 14:01:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:59.794 14:01:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:59.794 14:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.794 14:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.794 14:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.697 14:01:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:01.697 00:31:01.697 real 0m20.226s 00:31:01.697 user 0m27.776s 00:31:01.697 sys 0m5.053s 00:31:01.697 14:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.697 14:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.697 ************************************ 00:31:01.697 END TEST nvmf_discovery 00:31:01.697 ************************************ 00:31:01.697 14:01:04 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:01.697 14:01:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:01.697 14:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:01.697 14:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:01.697 ************************************ 00:31:01.697 START TEST nvmf_discovery_remove_ifc 00:31:01.697 ************************************ 00:31:01.697 14:01:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:01.957 * Looking for test storage... 
00:31:01.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.957 14:01:04 -- nvmf/common.sh@7 -- # uname -s 00:31:01.957 14:01:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.957 14:01:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.957 14:01:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.957 14:01:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.957 14:01:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.957 14:01:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.957 14:01:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.957 14:01:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.957 14:01:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.957 14:01:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.957 14:01:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:01.957 14:01:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:01.957 14:01:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.957 14:01:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.957 14:01:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.957 14:01:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.957 14:01:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.957 14:01:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.957 14:01:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.957 14:01:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.957 14:01:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.957 14:01:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.957 14:01:04 -- paths/export.sh@5 -- # export PATH 00:31:01.957 14:01:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.957 14:01:04 -- nvmf/common.sh@46 -- # : 0 00:31:01.957 14:01:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:01.957 14:01:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:01.957 14:01:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:01.957 14:01:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.957 14:01:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.957 14:01:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:01.957 14:01:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:01.957 14:01:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:01.957 14:01:04 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:01.957 14:01:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:01.957 14:01:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.957 14:01:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:01.957 14:01:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:01.957 14:01:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:01.957 14:01:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.957 14:01:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.957 14:01:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.957 14:01:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:01.957 14:01:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:01.957 14:01:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:01.957 14:01:04 -- common/autotest_common.sh@10 -- # set +x 00:31:07.231 14:01:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:07.231 14:01:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:07.231 14:01:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:07.231 14:01:09 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:07.231 14:01:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:07.231 14:01:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:07.231 14:01:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:07.231 14:01:09 -- nvmf/common.sh@294 -- # net_devs=() 00:31:07.231 14:01:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:07.231 14:01:09 -- nvmf/common.sh@295 -- # e810=() 00:31:07.231 14:01:09 -- nvmf/common.sh@295 -- # local -ga e810 00:31:07.231 14:01:09 -- nvmf/common.sh@296 -- # x722=() 00:31:07.231 14:01:09 -- nvmf/common.sh@296 -- # local -ga x722 00:31:07.231 14:01:09 -- nvmf/common.sh@297 -- # mlx=() 00:31:07.231 14:01:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:07.231 14:01:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.231 14:01:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:07.231 14:01:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:07.231 14:01:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:07.231 14:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:07.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:07.231 14:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:07.231 14:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:07.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:07.231 14:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:07.231 14:01:09 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:07.231 14:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.231 14:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.231 14:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:07.231 Found net devices under 0000:86:00.0: cvl_0_0 00:31:07.231 14:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.231 14:01:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:07.231 14:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.231 14:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.231 14:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:07.231 Found net devices under 0000:86:00.1: cvl_0_1 00:31:07.231 14:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.231 14:01:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:07.231 14:01:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:07.231 14:01:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:07.231 14:01:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.231 14:01:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.231 14:01:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.231 14:01:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:07.231 14:01:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.231 14:01:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.231 14:01:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:07.231 14:01:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.231 14:01:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.231 14:01:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:07.231 14:01:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:07.231 14:01:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.231 14:01:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.231 14:01:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.231 14:01:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.231 14:01:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:07.231 14:01:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.231 14:01:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.231 14:01:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.231 14:01:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:07.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:31:07.231 00:31:07.231 --- 10.0.0.2 ping statistics --- 00:31:07.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.232 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:07.232 14:01:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:07.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:31:07.232 00:31:07.232 --- 10.0.0.1 ping statistics --- 00:31:07.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.232 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:31:07.232 14:01:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.232 14:01:09 -- nvmf/common.sh@410 -- # return 0 00:31:07.232 14:01:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:07.232 14:01:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.232 14:01:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:07.232 14:01:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:07.232 14:01:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.232 14:01:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:07.232 14:01:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:07.232 14:01:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:07.232 14:01:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:07.232 14:01:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:07.232 14:01:09 -- common/autotest_common.sh@10 -- # set +x 00:31:07.232 14:01:09 -- nvmf/common.sh@469 -- # nvmfpid=1769113 00:31:07.232 14:01:09 -- nvmf/common.sh@470 -- # waitforlisten 1769113 00:31:07.232 14:01:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:07.232 14:01:09 -- common/autotest_common.sh@819 -- # '[' -z 1769113 ']' 00:31:07.232 14:01:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.232 14:01:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:07.232 14:01:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.232 14:01:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:07.232 14:01:09 -- common/autotest_common.sh@10 -- # set +x 00:31:07.232 [2024-07-11 14:01:09.574094] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:07.232 [2024-07-11 14:01:09.574134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.232 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.232 [2024-07-11 14:01:09.629862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.232 [2024-07-11 14:01:09.667669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:07.232 [2024-07-11 14:01:09.667777] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:07.232 [2024-07-11 14:01:09.667784] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:07.232 [2024-07-11 14:01:09.667790] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.232 [2024-07-11 14:01:09.667806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.168 14:01:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.168 14:01:10 -- common/autotest_common.sh@852 -- # return 0 00:31:08.168 14:01:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:08.168 14:01:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:08.168 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.168 14:01:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.168 14:01:10 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:08.168 14:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.168 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.168 [2024-07-11 14:01:10.405044] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.168 [2024-07-11 14:01:10.413157] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:08.168 null0 00:31:08.168 [2024-07-11 14:01:10.445189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.168 14:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.168 14:01:10 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1769359 00:31:08.168 14:01:10 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1769359 /tmp/host.sock 00:31:08.168 14:01:10 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:08.168 14:01:10 -- common/autotest_common.sh@819 -- # '[' -z 1769359 ']' 00:31:08.168 14:01:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:08.168 14:01:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:08.168 14:01:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:08.168 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:08.168 14:01:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:08.168 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.168 [2024-07-11 14:01:10.511295] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:08.168 [2024-07-11 14:01:10.511334] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769359 ] 00:31:08.168 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.168 [2024-07-11 14:01:10.566691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.168 [2024-07-11 14:01:10.605617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:08.168 [2024-07-11 14:01:10.605734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.427 14:01:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:08.427 14:01:10 -- common/autotest_common.sh@852 -- # return 0 00:31:08.427 14:01:10 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:08.428 14:01:10 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:08.428 14:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.428 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.428 14:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.428 14:01:10 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:08.428 14:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.428 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:08.428 14:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.428 14:01:10 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:08.428 14:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.428 14:01:10 -- common/autotest_common.sh@10 -- # set +x 00:31:09.363 [2024-07-11 14:01:11.741546] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:09.364 [2024-07-11 14:01:11.741568] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:09.364 [2024-07-11 14:01:11.741580] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:09.622 [2024-07-11 14:01:11.827837] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:09.622 [2024-07-11 14:01:12.044062] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:09.622 [2024-07-11 14:01:12.044096] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:09.622 [2024-07-11 14:01:12.044116] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:09.622 [2024-07-11 14:01:12.044129] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:09.622 [2024-07-11 14:01:12.044147] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:09.622 14:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:09.622 [2024-07-11 14:01:12.050839] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9c9130 was disconnected and freed. delete nvme_qpair. 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.622 14:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.622 14:01:12 -- common/autotest_common.sh@10 -- # set +x 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.622 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.622 14:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:09.881 14:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.881 14:01:12 -- common/autotest_common.sh@10 -- # set +x 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:09.881 14:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:09.881 14:01:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:10.817 14:01:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:10.817 14:01:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.817 14:01:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:10.817 14:01:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:10.817 14:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.817 14:01:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:10.817 14:01:13 -- common/autotest_common.sh@10 -- # set +x 00:31:11.076 14:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.076 14:01:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:11.076 14:01:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.058 14:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.058 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.058 14:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.058 14:01:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:12.996 14:01:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:12.996 14:01:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:12.996 14:01:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:12.996 14:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.996 14:01:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:12.997 14:01:15 -- common/autotest_common.sh@10 -- # set +x 00:31:12.997 14:01:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:12.997 14:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.997 14:01:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:12.997 14:01:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.376 14:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.376 14:01:16 -- common/autotest_common.sh@10 -- # set +x 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.376 14:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.376 14:01:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.313 14:01:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.313 14:01:17 -- common/autotest_common.sh@10 -- # set +x 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.313 14:01:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.313 [2024-07-11 14:01:17.485619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:15.313 [2024-07-11 14:01:17.485653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.313 [2024-07-11 14:01:17.485663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.313 [2024-07-11 14:01:17.485673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.313 [2024-07-11 14:01:17.485680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.313 [2024-07-11 14:01:17.485687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.313 [2024-07-11 14:01:17.485694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.313 [2024-07-11 14:01:17.485700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.313 [2024-07-11 14:01:17.485707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.313 [2024-07-11 14:01:17.485714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.313 [2024-07-11 14:01:17.485720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.313 [2024-07-11 14:01:17.485727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f590 is same with the state(5) to be set 00:31:15.313 [2024-07-11 14:01:17.495640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f590 (9): Bad file descriptor 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:15.313 14:01:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.313 [2024-07-11 14:01:17.505679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:16.251 14:01:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.251 14:01:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.251 14:01:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.251 14:01:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.251 14:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.251 14:01:18 -- common/autotest_common.sh@10 -- # set +x 00:31:16.251 14:01:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.251 [2024-07-11 14:01:18.539175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:17.190 [2024-07-11 14:01:19.563191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:17.190 [2024-07-11 14:01:19.563238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98f590 with addr=10.0.0.2, port=4420 00:31:17.190 [2024-07-11 14:01:19.563254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98f590 is same with the state(5) to be set 00:31:17.190 [2024-07-11 14:01:19.563277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:17.190 [2024-07-11 14:01:19.563287] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:17.190 [2024-07-11 14:01:19.563296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:17.190 [2024-07-11 14:01:19.563310] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:17.190 [2024-07-11 14:01:19.563676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f590 (9): Bad file descriptor 00:31:17.190 [2024-07-11 14:01:19.563701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
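The xtrace above repeats a single polling idiom: query the bdev list over the host application's RPC socket once per second until the expected name (eventually the empty string) comes back. Reassembled approximately from the traced commands; rpc_cmd is the suite's RPC wrapper, and this is a sketch, not the verbatim discovery_remove_ifc.sh:

    # Space-joined, sorted list of bdev names on the host app
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the list equals the expected value;
    # wait_for_bdev '' above waits for nvme0n1 to disappear once the
    # controller reset finally gives up.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }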
00:31:17.190 [2024-07-11 14:01:19.563724] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:17.190 [2024-07-11 14:01:19.563749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.190 [2024-07-11 14:01:19.563762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.190 [2024-07-11 14:01:19.563774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.190 [2024-07-11 14:01:19.563784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.190 [2024-07-11 14:01:19.563794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.190 [2024-07-11 14:01:19.563804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.190 [2024-07-11 14:01:19.563814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.190 [2024-07-11 14:01:19.563823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.190 [2024-07-11 14:01:19.563833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:17.190 [2024-07-11 14:01:19.563842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:17.190 [2024-07-11 14:01:19.563851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
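The whole failure is induced and cleared with plain ip(8) inside the target's network namespace: steps @75/@76 above delete the address and down the link, and the mirror-image restore at @82/@83 appears just below. Pulled out of the trace for readability (commands verbatim):

    # Take the target-side interface away; host reconnects then fail
    # with errno 110 (Connection timed out), as logged above
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # Put it back; the discovery poller re-attaches the subsystem as nvme1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up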
00:31:17.190 [2024-07-11 14:01:19.564281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f9a0 (9): Bad file descriptor 00:31:17.190 [2024-07-11 14:01:19.565294] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:17.190 [2024-07-11 14:01:19.565307] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:17.190 14:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.190 14:01:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:17.190 14:01:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.570 14:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.570 14:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.570 14:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.570 14:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.570 14:01:20 -- common/autotest_common.sh@10 -- # set +x 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.570 14:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:18.570 14:01:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.507 [2024-07-11 14:01:21.617363] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:19.507 [2024-07-11 14:01:21.617381] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:19.507 [2024-07-11 14:01:21.617395] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:19.507 [2024-07-11 14:01:21.703655] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.507 14:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:31:19.507 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:31:19.507 14:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:19.507 14:01:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.507 [2024-07-11 14:01:21.882173] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:19.507 [2024-07-11 14:01:21.882207] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:19.507 [2024-07-11 14:01:21.882223] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:19.507 [2024-07-11 14:01:21.882236] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:19.507 [2024-07-11 14:01:21.882242] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:19.507 [2024-07-11 14:01:21.886267] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9d3910 was disconnected and freed. delete nvme_qpair. 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.445 14:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.445 14:01:22 -- common/autotest_common.sh@10 -- # set +x 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:20.445 14:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:20.445 14:01:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1769359 00:31:20.445 14:01:22 -- common/autotest_common.sh@926 -- # '[' -z 1769359 ']' 00:31:20.445 14:01:22 -- common/autotest_common.sh@930 -- # kill -0 1769359 00:31:20.445 14:01:22 -- common/autotest_common.sh@931 -- # uname 00:31:20.445 14:01:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:20.445 14:01:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1769359 00:31:20.705 14:01:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:20.705 14:01:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:20.705 14:01:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1769359' 00:31:20.705 killing process with pid 1769359 00:31:20.705 14:01:22 -- common/autotest_common.sh@945 -- # kill 1769359 00:31:20.705 14:01:22 -- common/autotest_common.sh@950 -- # wait 1769359 00:31:20.705 14:01:23 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:20.705 14:01:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:20.705 14:01:23 -- nvmf/common.sh@116 -- # sync 00:31:20.705 14:01:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:20.705 14:01:23 -- nvmf/common.sh@119 -- # set +e 00:31:20.705 14:01:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:20.705 14:01:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:20.705 rmmod nvme_tcp 00:31:20.705 rmmod nvme_fabrics 00:31:20.705 rmmod nvme_keyring 00:31:20.705 14:01:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:20.705 14:01:23 -- nvmf/common.sh@123 -- # set -e 00:31:20.705 14:01:23 -- 
nvmf/common.sh@124 -- # return 0 00:31:20.705 14:01:23 -- nvmf/common.sh@477 -- # '[' -n 1769113 ']' 00:31:20.705 14:01:23 -- nvmf/common.sh@478 -- # killprocess 1769113 00:31:20.705 14:01:23 -- common/autotest_common.sh@926 -- # '[' -z 1769113 ']' 00:31:20.705 14:01:23 -- common/autotest_common.sh@930 -- # kill -0 1769113 00:31:20.705 14:01:23 -- common/autotest_common.sh@931 -- # uname 00:31:20.964 14:01:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:20.964 14:01:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1769113 00:31:20.964 14:01:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:20.964 14:01:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:20.964 14:01:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1769113' 00:31:20.964 killing process with pid 1769113 00:31:20.964 14:01:23 -- common/autotest_common.sh@945 -- # kill 1769113 00:31:20.964 14:01:23 -- common/autotest_common.sh@950 -- # wait 1769113 00:31:20.964 14:01:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:20.964 14:01:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:20.964 14:01:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:20.964 14:01:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.964 14:01:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:20.964 14:01:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.964 14:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.964 14:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.501 14:01:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:23.501 00:31:23.501 real 0m21.296s 00:31:23.501 user 0m26.248s 00:31:23.501 sys 0m5.127s 00:31:23.501 14:01:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.501 14:01:25 -- common/autotest_common.sh@10 -- # set +x 00:31:23.501 ************************************ 00:31:23.501 END TEST nvmf_discovery_remove_ifc 00:31:23.501 ************************************ 00:31:23.501 14:01:25 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:23.501 14:01:25 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:23.501 14:01:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:23.501 14:01:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:23.501 14:01:25 -- common/autotest_common.sh@10 -- # set +x 00:31:23.501 ************************************ 00:31:23.502 START TEST nvmf_digest 00:31:23.502 ************************************ 00:31:23.502 14:01:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:23.502 * Looking for test storage... 
00:31:23.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.502 14:01:25 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.502 14:01:25 -- nvmf/common.sh@7 -- # uname -s 00:31:23.502 14:01:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.502 14:01:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.502 14:01:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.502 14:01:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.502 14:01:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.502 14:01:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.502 14:01:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.502 14:01:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.502 14:01:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.502 14:01:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.502 14:01:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:23.502 14:01:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:23.502 14:01:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.502 14:01:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.502 14:01:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.502 14:01:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.502 14:01:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.502 14:01:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.502 14:01:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.502 14:01:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.502 14:01:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.502 14:01:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.502 14:01:25 -- paths/export.sh@5 -- # export PATH 00:31:23.502 14:01:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.502 14:01:25 -- nvmf/common.sh@46 -- # : 0 00:31:23.502 14:01:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:23.502 14:01:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:23.502 14:01:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:23.502 14:01:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.502 14:01:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.502 14:01:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:23.502 14:01:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:23.502 14:01:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:23.502 14:01:25 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:23.502 14:01:25 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:23.502 14:01:25 -- host/digest.sh@16 -- # runtime=2 00:31:23.502 14:01:25 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:23.502 14:01:25 -- host/digest.sh@132 -- # nvmftestinit 00:31:23.502 14:01:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:23.502 14:01:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.502 14:01:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:23.502 14:01:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:23.502 14:01:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:23.502 14:01:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.502 14:01:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:23.502 14:01:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.502 14:01:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:23.502 14:01:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:23.502 14:01:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:23.502 14:01:25 -- common/autotest_common.sh@10 -- # set +x 00:31:28.781 14:01:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:28.781 14:01:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:28.781 14:01:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:28.781 14:01:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:28.781 14:01:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:28.781 14:01:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:28.781 14:01:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:28.781 14:01:30 -- 
nvmf/common.sh@294 -- # net_devs=() 00:31:28.781 14:01:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:28.781 14:01:30 -- nvmf/common.sh@295 -- # e810=() 00:31:28.781 14:01:30 -- nvmf/common.sh@295 -- # local -ga e810 00:31:28.781 14:01:30 -- nvmf/common.sh@296 -- # x722=() 00:31:28.781 14:01:30 -- nvmf/common.sh@296 -- # local -ga x722 00:31:28.781 14:01:30 -- nvmf/common.sh@297 -- # mlx=() 00:31:28.781 14:01:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:28.781 14:01:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.781 14:01:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:28.782 14:01:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:28.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:28.782 14:01:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:28.782 14:01:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:28.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:28.782 14:01:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:28.782 14:01:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.782 14:01:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.782 14:01:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:28.782 Found net devices under 0000:86:00.0: cvl_0_0 00:31:28.782 14:01:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:28.782 14:01:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.782 14:01:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.782 14:01:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:28.782 Found net devices under 0000:86:00.1: cvl_0_1 00:31:28.782 14:01:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:28.782 14:01:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:28.782 14:01:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.782 14:01:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.782 14:01:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:28.782 14:01:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.782 14:01:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.782 14:01:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:28.782 14:01:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.782 14:01:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.782 14:01:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:28.782 14:01:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:28.782 14:01:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.782 14:01:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.782 14:01:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.782 14:01:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.782 14:01:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:28.782 14:01:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.782 14:01:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.782 14:01:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.782 14:01:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:28.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:31:28.782 00:31:28.782 --- 10.0.0.2 ping statistics --- 00:31:28.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.782 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:31:28.782 14:01:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:31:28.782 00:31:28.782 --- 10.0.0.1 ping statistics --- 00:31:28.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.782 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:31:28.782 14:01:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.782 14:01:30 -- nvmf/common.sh@410 -- # return 0 00:31:28.782 14:01:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:28.782 14:01:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.782 14:01:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:28.782 14:01:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.782 14:01:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:28.782 14:01:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:28.782 14:01:30 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:28.782 14:01:30 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:28.782 14:01:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:28.782 14:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:28.782 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 ************************************ 00:31:28.782 START TEST nvmf_digest_clean 00:31:28.782 ************************************ 00:31:28.782 14:01:30 -- common/autotest_common.sh@1104 -- # run_digest 00:31:28.782 14:01:30 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:28.782 14:01:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:28.782 14:01:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:28.782 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 14:01:30 -- nvmf/common.sh@469 -- # nvmfpid=1774859 00:31:28.782 14:01:30 -- nvmf/common.sh@470 -- # waitforlisten 1774859 00:31:28.782 14:01:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:28.782 14:01:30 -- common/autotest_common.sh@819 -- # '[' -z 1774859 ']' 00:31:28.782 14:01:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.782 14:01:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:28.782 14:01:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.782 14:01:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:28.782 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 [2024-07-11 14:01:30.755838] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:28.782 [2024-07-11 14:01:30.755879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.782 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.782 [2024-07-11 14:01:30.811506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.782 [2024-07-11 14:01:30.849931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:28.782 [2024-07-11 14:01:30.850036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.782 [2024-07-11 14:01:30.850048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.782 [2024-07-11 14:01:30.850055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.782 [2024-07-11 14:01:30.850069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.782 14:01:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:28.782 14:01:30 -- common/autotest_common.sh@852 -- # return 0 00:31:28.782 14:01:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:28.782 14:01:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:28.782 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 14:01:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.782 14:01:30 -- host/digest.sh@120 -- # common_target_config 00:31:28.782 14:01:30 -- host/digest.sh@43 -- # rpc_cmd 00:31:28.782 14:01:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:28.782 14:01:30 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 null0 00:31:28.782 [2024-07-11 14:01:31.001569] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.782 [2024-07-11 14:01:31.025786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.782 14:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:28.782 14:01:31 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:28.782 14:01:31 -- host/digest.sh@77 -- # local rw bs qd 00:31:28.782 14:01:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:28.782 14:01:31 -- host/digest.sh@80 -- # rw=randread 00:31:28.782 14:01:31 -- host/digest.sh@80 -- # bs=4096 00:31:28.782 14:01:31 -- host/digest.sh@80 -- # qd=128 00:31:28.782 14:01:31 -- host/digest.sh@82 -- # bperfpid=1774878 00:31:28.782 14:01:31 -- host/digest.sh@83 -- # waitforlisten 1774878 /var/tmp/bperf.sock 00:31:28.782 14:01:31 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:28.782 14:01:31 -- common/autotest_common.sh@819 -- # '[' -z 1774878 ']' 00:31:28.782 14:01:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:28.782 14:01:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:28.782 14:01:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:28.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:28.782 14:01:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:28.782 14:01:31 -- common/autotest_common.sh@10 -- # set +x 00:31:28.782 [2024-07-11 14:01:31.075171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:28.782 [2024-07-11 14:01:31.075210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774878 ] 00:31:28.782 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.782 [2024-07-11 14:01:31.128688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.782 [2024-07-11 14:01:31.166868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.782 14:01:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:28.782 14:01:31 -- common/autotest_common.sh@852 -- # return 0 00:31:28.783 14:01:31 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:28.783 14:01:31 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:28.783 14:01:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:29.042 14:01:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.042 14:01:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:29.611 nvme0n1 00:31:29.611 14:01:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:29.611 14:01:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:29.611 Running I/O for 2 seconds... 
00:31:31.516 00:31:31.516 Latency(us) 00:31:31.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.516 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:31.516 nvme0n1 : 2.00 28234.68 110.29 0.00 0.00 4529.05 1823.61 9346.00 00:31:31.516 =================================================================================================================== 00:31:31.516 Total : 28234.68 110.29 0.00 0.00 4529.05 1823.61 9346.00 00:31:31.516 0 00:31:31.516 14:01:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:31.516 14:01:33 -- host/digest.sh@92 -- # get_accel_stats 00:31:31.516 14:01:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:31.516 14:01:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:31.516 | select(.opcode=="crc32c") 00:31:31.516 | "\(.module_name) \(.executed)"' 00:31:31.516 14:01:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:31.775 14:01:34 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:31.775 14:01:34 -- host/digest.sh@93 -- # exp_module=software 00:31:31.775 14:01:34 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:31.775 14:01:34 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:31.775 14:01:34 -- host/digest.sh@97 -- # killprocess 1774878 00:31:31.775 14:01:34 -- common/autotest_common.sh@926 -- # '[' -z 1774878 ']' 00:31:31.775 14:01:34 -- common/autotest_common.sh@930 -- # kill -0 1774878 00:31:31.775 14:01:34 -- common/autotest_common.sh@931 -- # uname 00:31:31.775 14:01:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:31.775 14:01:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1774878 00:31:31.775 14:01:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:31.775 14:01:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:31.775 14:01:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1774878' 00:31:31.775 killing process with pid 1774878 00:31:31.775 14:01:34 -- common/autotest_common.sh@945 -- # kill 1774878 00:31:31.775 Received shutdown signal, test time was about 2.000000 seconds 00:31:31.775 00:31:31.775 Latency(us) 00:31:31.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.775 =================================================================================================================== 00:31:31.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:31.775 14:01:34 -- common/autotest_common.sh@950 -- # wait 1774878 00:31:32.034 14:01:34 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:31:32.034 14:01:34 -- host/digest.sh@77 -- # local rw bs qd 00:31:32.034 14:01:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:32.034 14:01:34 -- host/digest.sh@80 -- # rw=randread 00:31:32.034 14:01:34 -- host/digest.sh@80 -- # bs=131072 00:31:32.034 14:01:34 -- host/digest.sh@80 -- # qd=16 00:31:32.034 14:01:34 -- host/digest.sh@82 -- # bperfpid=1775359 00:31:32.034 14:01:34 -- host/digest.sh@83 -- # waitforlisten 1775359 /var/tmp/bperf.sock 00:31:32.034 14:01:34 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:32.034 14:01:34 -- common/autotest_common.sh@819 -- # '[' -z 1775359 ']' 00:31:32.034 14:01:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
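Every run_bperf pass in this digest test has the same shape, and each piece is visible in the trace: bdevperf starts suspended with --wait-for-rpc on a private socket, framework init is completed over RPC, a TCP controller is attached with data digest enabled (--ddgst), and bdevperf.py then drives the timed workload. Condensed sketch with the workspace paths shortened; the flags are the ones traced above:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 2-second run defined by -t/-w/-o/-q on the bdevperf command line
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests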
00:31:32.034 14:01:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:32.034 14:01:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:32.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:32.034 14:01:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:32.034 14:01:34 -- common/autotest_common.sh@10 -- # set +x 00:31:32.034 [2024-07-11 14:01:34.369952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:32.034 [2024-07-11 14:01:34.370000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775359 ] 00:31:32.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:32.034 Zero copy mechanism will not be used. 00:31:32.034 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.034 [2024-07-11 14:01:34.425791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.034 [2024-07-11 14:01:34.461957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.293 14:01:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:32.293 14:01:34 -- common/autotest_common.sh@852 -- # return 0 00:31:32.293 14:01:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:32.293 14:01:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:32.293 14:01:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:32.293 14:01:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:32.293 14:01:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:32.861 nvme0n1 00:31:32.861 14:01:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:32.861 14:01:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:32.861 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:32.861 Zero copy mechanism will not be used. 00:31:32.861 Running I/O for 2 seconds... 
00:31:34.795 00:31:34.795 Latency(us) 00:31:34.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:34.795 nvme0n1 : 2.00 4548.87 568.61 0.00 0.00 3514.61 765.77 6325.65 00:31:34.795 =================================================================================================================== 00:31:34.795 Total : 4548.87 568.61 0.00 0.00 3514.61 765.77 6325.65 00:31:34.795 0 00:31:34.795 14:01:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:34.795 14:01:37 -- host/digest.sh@92 -- # get_accel_stats 00:31:34.795 14:01:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:34.795 14:01:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:34.795 | select(.opcode=="crc32c") 00:31:34.795 | "\(.module_name) \(.executed)"' 00:31:34.795 14:01:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:35.054 14:01:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:35.054 14:01:37 -- host/digest.sh@93 -- # exp_module=software 00:31:35.054 14:01:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:35.054 14:01:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:35.054 14:01:37 -- host/digest.sh@97 -- # killprocess 1775359 00:31:35.054 14:01:37 -- common/autotest_common.sh@926 -- # '[' -z 1775359 ']' 00:31:35.054 14:01:37 -- common/autotest_common.sh@930 -- # kill -0 1775359 00:31:35.054 14:01:37 -- common/autotest_common.sh@931 -- # uname 00:31:35.054 14:01:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:35.054 14:01:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1775359 00:31:35.054 14:01:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:35.054 14:01:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:35.054 14:01:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1775359' 00:31:35.054 killing process with pid 1775359 00:31:35.054 14:01:37 -- common/autotest_common.sh@945 -- # kill 1775359 00:31:35.054 Received shutdown signal, test time was about 2.000000 seconds 00:31:35.054 00:31:35.054 Latency(us) 00:31:35.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.054 =================================================================================================================== 00:31:35.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.054 14:01:37 -- common/autotest_common.sh@950 -- # wait 1775359 00:31:35.313 14:01:37 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:31:35.313 14:01:37 -- host/digest.sh@77 -- # local rw bs qd 00:31:35.313 14:01:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:35.313 14:01:37 -- host/digest.sh@80 -- # rw=randwrite 00:31:35.313 14:01:37 -- host/digest.sh@80 -- # bs=4096 00:31:35.313 14:01:37 -- host/digest.sh@80 -- # qd=128 00:31:35.313 14:01:37 -- host/digest.sh@82 -- # bperfpid=1776054 00:31:35.313 14:01:37 -- host/digest.sh@83 -- # waitforlisten 1776054 /var/tmp/bperf.sock 00:31:35.313 14:01:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:35.313 14:01:37 -- common/autotest_common.sh@819 -- # '[' -z 1776054 ']' 00:31:35.313 14:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
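The pass/fail decision after each run comes from accel statistics rather than from the I/O numbers: the test asks which accel module executed the crc32c (digest) operations and how many it completed. The query, with the jq filter exactly as traced:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[]
                  | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"'
    # Expected output here is "software <count>" with count > 0; the
    # traced checks are exp_module=software and (( acc_executed > 0 )).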
00:31:35.313 14:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:35.313 14:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:35.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:35.313 14:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:35.313 14:01:37 -- common/autotest_common.sh@10 -- # set +x 00:31:35.313 [2024-07-11 14:01:37.615894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:35.313 [2024-07-11 14:01:37.615940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776054 ] 00:31:35.313 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.313 [2024-07-11 14:01:37.672135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.313 [2024-07-11 14:01:37.710305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.313 14:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:35.313 14:01:37 -- common/autotest_common.sh@852 -- # return 0 00:31:35.313 14:01:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:35.313 14:01:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:35.313 14:01:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:35.571 14:01:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:35.571 14:01:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:36.139 nvme0n1 00:31:36.139 14:01:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:36.139 14:01:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:36.139 Running I/O for 2 seconds... 
00:31:38.042 00:31:38.042 Latency(us) 00:31:38.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.042 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.042 nvme0n1 : 2.00 28949.33 113.08 0.00 0.00 4416.58 2450.48 16298.52 00:31:38.042 =================================================================================================================== 00:31:38.043 Total : 28949.33 113.08 0.00 0.00 4416.58 2450.48 16298.52 00:31:38.043 0 00:31:38.043 14:01:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:38.043 14:01:40 -- host/digest.sh@92 -- # get_accel_stats 00:31:38.043 14:01:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:38.043 14:01:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:38.043 | select(.opcode=="crc32c") 00:31:38.043 | "\(.module_name) \(.executed)"' 00:31:38.043 14:01:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:38.301 14:01:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:38.301 14:01:40 -- host/digest.sh@93 -- # exp_module=software 00:31:38.301 14:01:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:38.301 14:01:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:38.301 14:01:40 -- host/digest.sh@97 -- # killprocess 1776054 00:31:38.301 14:01:40 -- common/autotest_common.sh@926 -- # '[' -z 1776054 ']' 00:31:38.301 14:01:40 -- common/autotest_common.sh@930 -- # kill -0 1776054 00:31:38.301 14:01:40 -- common/autotest_common.sh@931 -- # uname 00:31:38.301 14:01:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:38.301 14:01:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1776054 00:31:38.301 14:01:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:38.301 14:01:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:38.301 14:01:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1776054' 00:31:38.301 killing process with pid 1776054 00:31:38.301 14:01:40 -- common/autotest_common.sh@945 -- # kill 1776054 00:31:38.301 Received shutdown signal, test time was about 2.000000 seconds 00:31:38.301 00:31:38.302 Latency(us) 00:31:38.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:38.302 =================================================================================================================== 00:31:38.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:38.302 14:01:40 -- common/autotest_common.sh@950 -- # wait 1776054 00:31:38.562 14:01:40 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:31:38.562 14:01:40 -- host/digest.sh@77 -- # local rw bs qd 00:31:38.562 14:01:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:38.562 14:01:40 -- host/digest.sh@80 -- # rw=randwrite 00:31:38.562 14:01:40 -- host/digest.sh@80 -- # bs=131072 00:31:38.562 14:01:40 -- host/digest.sh@80 -- # qd=16 00:31:38.562 14:01:40 -- host/digest.sh@82 -- # bperfpid=1776543 00:31:38.562 14:01:40 -- host/digest.sh@83 -- # waitforlisten 1776543 /var/tmp/bperf.sock 00:31:38.562 14:01:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:38.562 14:01:40 -- common/autotest_common.sh@819 -- # '[' -z 1776543 ']' 00:31:38.562 14:01:40 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:31:38.562 14:01:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:38.562 14:01:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:38.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:38.563 14:01:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:38.563 14:01:40 -- common/autotest_common.sh@10 -- # set +x 00:31:38.563 [2024-07-11 14:01:40.849362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:38.563 [2024-07-11 14:01:40.849408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776543 ] 00:31:38.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:38.563 Zero copy mechanism will not be used. 00:31:38.563 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.563 [2024-07-11 14:01:40.904892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.563 [2024-07-11 14:01:40.939097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.563 14:01:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.563 14:01:40 -- common/autotest_common.sh@852 -- # return 0 00:31:38.563 14:01:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:38.563 14:01:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:38.563 14:01:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:38.823 14:01:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:38.823 14:01:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:39.392 nvme0n1 00:31:39.392 14:01:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:39.392 14:01:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.392 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:39.392 Zero copy mechanism will not be used. 00:31:39.392 Running I/O for 2 seconds... 
00:31:41.295 00:31:41.295 Latency(us) 00:31:41.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.295 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:41.295 nvme0n1 : 2.00 6957.34 869.67 0.00 0.00 2295.57 1638.40 7978.30 00:31:41.295 =================================================================================================================== 00:31:41.295 Total : 6957.34 869.67 0.00 0.00 2295.57 1638.40 7978.30 00:31:41.295 0 00:31:41.295 14:01:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:31:41.295 14:01:43 -- host/digest.sh@92 -- # get_accel_stats 00:31:41.295 14:01:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:41.295 14:01:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:41.295 | select(.opcode=="crc32c") 00:31:41.295 | "\(.module_name) \(.executed)"' 00:31:41.295 14:01:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:41.553 14:01:43 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:31:41.553 14:01:43 -- host/digest.sh@93 -- # exp_module=software 00:31:41.553 14:01:43 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:31:41.553 14:01:43 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:41.553 14:01:43 -- host/digest.sh@97 -- # killprocess 1776543 00:31:41.554 14:01:43 -- common/autotest_common.sh@926 -- # '[' -z 1776543 ']' 00:31:41.554 14:01:43 -- common/autotest_common.sh@930 -- # kill -0 1776543 00:31:41.554 14:01:43 -- common/autotest_common.sh@931 -- # uname 00:31:41.554 14:01:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.554 14:01:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1776543 00:31:41.554 14:01:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:41.554 14:01:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:41.554 14:01:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1776543' 00:31:41.554 killing process with pid 1776543 00:31:41.554 14:01:43 -- common/autotest_common.sh@945 -- # kill 1776543 00:31:41.554 Received shutdown signal, test time was about 2.000000 seconds 00:31:41.554 00:31:41.554 Latency(us) 00:31:41.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.554 =================================================================================================================== 00:31:41.554 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.554 14:01:43 -- common/autotest_common.sh@950 -- # wait 1776543 00:31:41.811 14:01:44 -- host/digest.sh@126 -- # killprocess 1774859 00:31:41.811 14:01:44 -- common/autotest_common.sh@926 -- # '[' -z 1774859 ']' 00:31:41.811 14:01:44 -- common/autotest_common.sh@930 -- # kill -0 1774859 00:31:41.811 14:01:44 -- common/autotest_common.sh@931 -- # uname 00:31:41.811 14:01:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:41.811 14:01:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1774859 00:31:41.811 14:01:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:41.811 14:01:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:41.811 14:01:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1774859' 00:31:41.811 killing process with pid 1774859 00:31:41.811 14:01:44 -- common/autotest_common.sh@945 -- # kill 1774859 00:31:41.811 14:01:44 -- common/autotest_common.sh@950 -- # wait 1774859 
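Teardown goes through a killprocess helper; its traced steps (argument check, kill -0 liveness probe, uname, ps --no-headers -o comm=, the sudo guard, kill, wait) reconstruct to roughly the sketch below. This is an approximation from the trace, not the verbatim autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1   # still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # a sudo wrapper would need different handling; the traced
            # reactor_0/reactor_1 processes take the plain-kill path
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so sockets and hugepages are released
    }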
00:31:42.070
00:31:42.070 real 0m13.561s
00:31:42.070 user 0m25.486s
00:31:42.070 sys 0m4.398s
00:31:42.070 14:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.070 ************************************
00:31:42.070 END TEST nvmf_digest_clean
00:31:42.070 ************************************
00:31:42.070 14:01:44 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:31:42.070 14:01:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:31:42.070 14:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.070 ************************************
00:31:42.070 START TEST nvmf_digest_error
00:31:42.070 ************************************
00:31:42.070 14:01:44 -- common/autotest_common.sh@1104 -- # run_digest_error
00:31:42.070 14:01:44 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:31:42.070 14:01:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:31:42.070 14:01:44 -- common/autotest_common.sh@712 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.070 14:01:44 -- nvmf/common.sh@469 -- # nvmfpid=1777050
00:31:42.070 14:01:44 -- nvmf/common.sh@470 -- # waitforlisten 1777050
00:31:42.070 14:01:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:31:42.070 14:01:44 -- common/autotest_common.sh@819 -- # '[' -z 1777050 ']'
00:31:42.070 14:01:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:42.070 14:01:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:42.070 14:01:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:01:44 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.070 [2024-07-11 14:01:44.358679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:42.070 [2024-07-11 14:01:44.358730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:42.070 EAL: No free 2048 kB hugepages reported on node 1
00:31:42.070 [2024-07-11 14:01:44.416018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:42.070 [2024-07-11 14:01:44.453993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:42.070 [2024-07-11 14:01:44.454100] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-07-11 14:01:44.454107] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-07-11 14:01:44.454113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
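In the trace above, nvmfappstart backgrounds nvmf_tgt with --wait-for-rpc and then blocks in waitforlisten until the app's RPC socket answers. A minimal stand-in for that helper, assuming rpc_get_methods as the liveness probe and SPDK_ROOT as before (the real autotest_common.sh version also honors the max_retries=100 seen in the trace):

    # Sketch of waitforlisten: poll the RPC socket until the app answers or dies.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            # Give up immediately if the target process already exited.
            kill -0 "$pid" 2>/dev/null || return 1
            # Any successful RPC (rpc_get_methods is cheap) means it is listening.
            "$SPDK_ROOT/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }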
00:31:42.070 [2024-07-11 14:01:44.454128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:42.070 14:01:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:42.070 14:01:44 -- common/autotest_common.sh@852 -- # return 0
00:31:42.070 14:01:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:31:42.070 14:01:44 -- common/autotest_common.sh@718 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.070 14:01:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:42.070 14:01:44 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:31:42.070 14:01:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:42.070 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.329 [2024-07-11 14:01:44.526573] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:31:42.329 14:01:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:42.329 14:01:44 -- host/digest.sh@104 -- # common_target_config
00:31:42.329 14:01:44 -- host/digest.sh@43 -- # rpc_cmd
00:31:42.329 14:01:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:42.329 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.329 null0
00:31:42.329 [2024-07-11 14:01:44.613706] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:42.329 [2024-07-11 14:01:44.637887] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:42.329 14:01:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:42.329 14:01:44 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:31:42.329 14:01:44 -- host/digest.sh@54 -- # local rw bs qd
00:31:42.329 14:01:44 -- host/digest.sh@56 -- # rw=randread
00:31:42.329 14:01:44 -- host/digest.sh@56 -- # bs=4096
00:31:42.329 14:01:44 -- host/digest.sh@56 -- # qd=128
00:31:42.329 14:01:44 -- host/digest.sh@58 -- # bperfpid=1777226
00:31:42.329 14:01:44 -- host/digest.sh@60 -- # waitforlisten 1777226 /var/tmp/bperf.sock
00:31:42.329 14:01:44 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:31:42.329 14:01:44 -- common/autotest_common.sh@819 -- # '[' -z 1777226 ']'
00:31:42.329 14:01:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:42.329 14:01:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:42.329 14:01:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:01:44 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:42.329 14:01:44 -- common/autotest_common.sh@10 -- # set +x
00:31:42.329 [2024-07-11 14:01:44.685525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:42.329 [2024-07-11 14:01:44.685565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777226 ]
00:31:42.329 EAL: No free 2048 kB hugepages reported on node 1
00:31:42.329 [2024-07-11 14:01:44.739554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:42.329 [2024-07-11 14:01:44.777866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:43.268 14:01:45 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:43.268 14:01:45 -- common/autotest_common.sh@852 -- # return 0
00:31:43.268 14:01:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:43.268 14:01:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:43.268 14:01:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:43.268 14:01:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:43.268 14:01:45 -- common/autotest_common.sh@10 -- # set +x
00:31:43.268 14:01:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:43.268 14:01:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:43.268 14:01:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:43.527 nvme0n1
00:31:43.527 14:01:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:43.527 14:01:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:43.527 14:01:45 -- common/autotest_common.sh@10 -- # set +x
00:31:43.527 14:01:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:43.527 14:01:45 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:43.527 14:01:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:43.785 Running I/O for 2 seconds...
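That is the whole error path set up: the bdevperf launched above (-m 2 core mask, -r RPC socket, -w randread, -o 4096-byte I/O, -t 2 seconds, -q 128 queue depth, -z to idle until told to start) attaches the controller with data digest enabled, while the target has crc32c assigned to the error-injection accel module. Arming the injector then corrupts the digests the initiator receives. A condensed sketch of the sequence, reusing the bperf_rpc/bperf_py helpers sketched earlier; rpc_cmd here is a stand-in that targets the nvmf_tgt socket (the test script routes it through an RPC pipe):

    # Stand-in for rpc_cmd: talk to the target app on its default socket.
    rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    # Target side: route crc32c through the error-injection accel module.
    rpc_cmd accel_assign_opc -o crc32c -m error

    # Initiator side, mirroring the trace: per-error NVMe statistics, a bdev
    # retry count of -1, and a disarmed injector before attach.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c results; each corrupted data digest then
    # surfaces below as a "data digest error" plus a failed READ completion.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    bperf_py perform_tests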
00:31:43.785 [2024-07-11 14:01:46.040798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.040831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.040841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.052305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.052330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.052340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.063450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.063473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.063481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.074302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.074324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.074332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.085557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.085578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.085586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.095985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.096006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.096014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.104421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.104442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.785 [2024-07-11 14:01:46.104450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.785 [2024-07-11 14:01:46.115834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.785 [2024-07-11 14:01:46.115855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.115863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.127470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.127495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.127503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.139671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.139692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.139700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.149816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.149837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.149845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.158966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.158986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.158994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.168135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.168155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.168169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.176367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.176388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.176396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.184918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.184939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.184947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.194145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.194172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.194181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.202529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.202557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.211264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.211284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.211292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.219827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.219848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.219856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.228950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.228970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.228978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:43.786 [2024-07-11 14:01:46.237627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:43.786 [2024-07-11 14:01:46.237647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.786 [2024-07-11 14:01:46.237656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.246251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.246273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.246282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.254715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.254736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.254744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.263772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.263792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.263800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.272497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.272518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.272526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.281097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.281117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.281129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.290418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.290439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.290447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.298949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.298969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:44.045 [2024-07-11 14:01:46.298977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.307654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.307674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.307682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.316742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.316763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.316771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.324993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.325013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.325021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.334272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.334293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.334301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.342952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.342971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.342979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.351358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.351386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.360519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.360539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10888 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.360547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.368873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.368893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.368901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.377923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.045 [2024-07-11 14:01:46.377944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.045 [2024-07-11 14:01:46.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.045 [2024-07-11 14:01:46.386409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.386431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.386439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.395578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.395598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.395607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.404153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.404178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.404186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.412899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.412918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.412927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.421351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.421371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.421379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.430473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.430493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.439036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.439056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.439064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.447990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.448010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.448018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.456556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.456576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.456584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.465380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.465400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.465408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.473713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.473733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.473741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.482943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.482963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.482971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.491414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.491434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.491442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.046 [2024-07-11 14:01:46.499973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.046 [2024-07-11 14:01:46.499993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.046 [2024-07-11 14:01:46.500001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.508674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.508698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.508707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.517852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.517872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.517881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.526291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.526311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.526319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.534752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.534773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.534781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.543310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 
00:31:44.305 [2024-07-11 14:01:46.543330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.543338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.552734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.552754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.552762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.561313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.561332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.561340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.570096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.570116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.570124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.578648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.578669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.578677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.587846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.587866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.587874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.596207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.596226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.596234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.604877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.604897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.604905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.614010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.305 [2024-07-11 14:01:46.614030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.305 [2024-07-11 14:01:46.614039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.305 [2024-07-11 14:01:46.622604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.622624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.622632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.631109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.631129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.631137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.639697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.639717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.639725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.648751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.648771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.648780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.657440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.657472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.665635] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.665655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.674811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.674832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.674839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.683571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.683591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.683599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.692083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.692103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.692111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.701332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.701352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.701360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.709802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.709822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.709830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.718483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.718503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.718511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
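Each failure in this run is logged as a pair: nvme_io_qpair_print_command echoes the READ being failed (submission queue id, command id, starting LBA, length in blocks), and spdk_nvme_print_completion prints the status, where (00/22) is status code type 0x0 (generic) / status code 0x22, Command Transient Transport Error, with dnr:0 indicating the command may be retried. A throwaway way to tally the injected failures out of a saved run (the log file name is illustrative):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log
    # Or list how often each LBA was hit:
    grep -o 'lba:[0-9]*' bdevperf.log | sort -t: -k2 -n | uniq -c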
00:31:44.306 [2024-07-11 14:01:46.726965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.726985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.726993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.735522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.735545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.735553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.744674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.744694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.744702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.306 [2024-07-11 14:01:46.753433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.306 [2024-07-11 14:01:46.753453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.306 [2024-07-11 14:01:46.753461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.565 [2024-07-11 14:01:46.761933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.565 [2024-07-11 14:01:46.761955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.565 [2024-07-11 14:01:46.761964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.565 [2024-07-11 14:01:46.770558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.565 [2024-07-11 14:01:46.770578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.565 [2024-07-11 14:01:46.770586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.565 [2024-07-11 14:01:46.779555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50) 00:31:44.565 [2024-07-11 14:01:46.779579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.565 [2024-07-11 14:01:46.779590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:44.565 [2024-07-11 14:01:46.788325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50)
00:31:44.565 [2024-07-11 14:01:46.788347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.565 [2024-07-11 14:01:46.788355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair 0xefdc50, READ command print, COMMAND TRANSIENT TRANSPORT ERROR completion) repeats for every subsequent failed READ from 14:01:46.796 through 14:01:48.030, the entries differing only in timestamp, cid, and lba ...]
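Each triplet above is the initiator doing exactly what the digest test expects: nvme_tcp detects a CRC32C data digest mismatch on a response PDU for qpair 0xefdc50, prints the in-flight READ, and completes it with status (00/22), i.e. SCT 0x0 / SC 0x22 COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 leaves the command eligible for retry. The mismatches are deliberate, produced by the crc32c corruption injected through the accel framework. A quick, illustrative way to tally these completions from a captured console log (the log file name here is hypothetical):

    # count READ completions failed with a transient transport error
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-digest.log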
digest error on tqpair=(0xefdc50)
00:31:45.609 [2024-07-11 14:01:48.019418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:45.609 [2024-07-11 14:01:48.019426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:45.609 [2024-07-11 14:01:48.030904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefdc50)
00:31:45.609 [2024-07-11 14:01:48.030928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:45.609 [2024-07-11 14:01:48.030936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:45.609
00:31:45.609 Latency(us)
00:31:45.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.609 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:45.609 nvme0n1 : 2.00 27548.38 107.61 0.00 0.00 4642.71 1880.60 13335.15
00:31:45.609 ===================================================================================================================
00:31:45.609 Total : 27548.38 107.61 0.00 0.00 4642.71 1880.60 13335.15
00:31:45.609 0
00:31:45.609 14:01:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:45.609 14:01:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:45.609 14:01:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:45.609 14:01:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:45.609 | .driver_specific
00:31:45.609 | .nvme_error
00:31:45.609 | .status_code
00:31:45.609 | .command_transient_transport_error'
00:31:45.869 14:01:48 -- host/digest.sh@71 -- # (( 216 > 0 ))
00:31:45.869 14:01:48 -- host/digest.sh@73 -- # killprocess 1777226
00:31:45.869 14:01:48 -- common/autotest_common.sh@926 -- # '[' -z 1777226 ']'
00:31:45.869 14:01:48 -- common/autotest_common.sh@930 -- # kill -0 1777226
00:31:45.869 14:01:48 -- common/autotest_common.sh@931 -- # uname
00:31:45.869 14:01:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:45.869 14:01:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1777226
00:31:45.869 14:01:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:45.869 14:01:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:45.869 14:01:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1777226'
killing process with pid 1777226
14:01:48 -- common/autotest_common.sh@945 -- # kill 1777226
Received shutdown signal, test time was about 2.000000 seconds
00:31:45.870
00:31:45.870 Latency(us)
00:31:45.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.870 ===================================================================================================================
00:31:45.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:45.870 14:01:48 -- common/autotest_common.sh@950 -- # wait 1777226
00:31:46.129 14:01:48 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:31:46.129 14:01:48 -- host/digest.sh@54 -- # local rw bs qd
00:31:46.129 14:01:48 -- host/digest.sh@56 -- # rw=randread
00:31:46.129 14:01:48 -- host/digest.sh@56 -- # bs=131072
00:31:46.129 14:01:48 -- host/digest.sh@56 -- # qd=16
00:31:46.129 14:01:48 -- host/digest.sh@58 -- # bperfpid=1777780
00:31:46.129 14:01:48 -- host/digest.sh@60 -- # waitforlisten 1777780 /var/tmp/bperf.sock
00:31:46.129 14:01:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:46.129 14:01:48 -- common/autotest_common.sh@819 -- # '[' -z 1777780 ']'
00:31:46.129 14:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:46.129 14:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:46.129 14:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:46.129 14:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:46.129 14:01:48 -- common/autotest_common.sh@10 -- # set +x
00:31:46.129 [2024-07-11 14:01:48.481270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:46.129 [2024-07-11 14:01:48.481320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777780 ]
00:31:46.129 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:46.129 Zero copy mechanism will not be used.
00:31:46.129 EAL: No free 2048 kB hugepages reported on node 1
00:31:46.129 [2024-07-11 14:01:48.533843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:46.129 [2024-07-11 14:01:48.572303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:47.068 14:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:47.068 14:01:49 -- common/autotest_common.sh@852 -- # return 0
00:31:47.068 14:01:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:47.068 14:01:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:47.068 14:01:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:47.068 14:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:47.068 14:01:49 -- common/autotest_common.sh@10 -- # set +x
00:31:47.068 14:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:47.068 14:01:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:47.068 14:01:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:47.328 nvme0n1
00:31:47.328 14:01:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:47.328 14:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:47.328 14:01:49 -- common/autotest_common.sh@10 -- # set +x
00:31:47.328 14:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:47.328 14:01:49 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:47.328 14:01:49 -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:47.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:47.328 Zero copy mechanism will not be used. 00:31:47.328 Running I/O for 2 seconds... 00:31:47.588 [2024-07-11 14:01:49.798934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.798966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.798976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.808594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.808618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.808627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.817628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.817649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.817657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.825561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.825583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.825591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.832756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.832777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.832786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.839754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.839776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.839784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.846397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 
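The 14:01:48-14:01:49 trace above is the entire bring-up for this run: start bdevperf on /var/tmp/bperf.sock, enable per-bdev NVMe error counters, attach the TCP controller with data digest (--ddgst) enabled, arm crc32c corruption in the accel layer, then drive I/O with perform_tests. A minimal standalone sketch of the same sequence, using only the paths, flags, and RPC names shown in the trace; the rpc() helper and the socket-poll loop (standing in for the harness's waitforlisten) are shorthand, not the harness code:

    #!/usr/bin/env bash
    # Sketch of run_bperf_err's bring-up as traced above; assumes an nvmf-tcp
    # target is already listening on 10.0.0.2:4420 (set up earlier in this job).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    # 131072-byte random reads, queue depth 16, 2 seconds, wait for RPC start (-z)
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # simplified waitforlisten

    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc accel_error_inject_error -o crc32c -t disable        # clean slate before attach
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # same injection as the trace
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests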
[2024-07-11 14:01:49.846420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.846428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.852882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.852903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.852911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.858883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.858909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.858917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.865599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.865619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.865627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.874784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.874803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.874812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.883526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.883546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.883554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.891472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.891492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.891504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.899820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.899840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.899848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.908454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.908474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.908482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.916226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.916245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.916253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.925264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.925284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.925292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.933572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.933601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.941294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.941314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.941322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.948358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.948378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.948386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.955164] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.955185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.955193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.961570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.961591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.961599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.967628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.967648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.967656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.974049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.974069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.974082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.981080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.981100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.981108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.990389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.588 [2024-07-11 14:01:49.990408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.588 [2024-07-11 14:01:49.990416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.588 [2024-07-11 14:01:49.999046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:49.999066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:49.999074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:31:47.589 [2024-07-11 14:01:50.006852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:50.006874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:50.006881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.589 [2024-07-11 14:01:50.016293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:50.016314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:50.016322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.589 [2024-07-11 14:01:50.024998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:50.025020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:50.025032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.589 [2024-07-11 14:01:50.032760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:50.032781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:50.032789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.589 [2024-07-11 14:01:50.040443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.589 [2024-07-11 14:01:50.040464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.589 [2024-07-11 14:01:50.040473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.046917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.046938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.046946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.052046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.052066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.052075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.057233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.057254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.057262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.062074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.062096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.062104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.067125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.067146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.067154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.073478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.073500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.073509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.078825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.078852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.078860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.083905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.083926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.083936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.089071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.089094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.089103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.095127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.095150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.095164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.100908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.100931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.100940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.106014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.106037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.106046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.110569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.110592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.110601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.115242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.849 [2024-07-11 14:01:50.115264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.849 [2024-07-11 14:01:50.115272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.849 [2024-07-11 14:01:50.120237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.120261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.120271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.125513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.125535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:47.850 [2024-07-11 14:01:50.125545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.131707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.131730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.131738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.137510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.137533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.137541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.143008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.143030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.143038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.148720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.148743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.148752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.155030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.155052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.155060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.161239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.161261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.161269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.166937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.166958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.166966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.172527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.172547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.172561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.178193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.178213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.178221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.183810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.183833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.183841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.189337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.189358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.189366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.193678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.193700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.193708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.198191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.198213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.198222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.203298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.203318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.203326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.208549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.208570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.212816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.212838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.212846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.217150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.217183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.217191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.221309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.221331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.221339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.225817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.225838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.225847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.230837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 [2024-07-11 14:01:50.230858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.850 [2024-07-11 14:01:50.230866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.850 [2024-07-11 14:01:50.235867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.850 
[2024-07-11 14:01:50.235889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.235896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.241126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.241148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.241156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.246242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.246263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.246271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.251485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.251506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.251513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.256678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.256700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.256710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.261939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.261961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.261968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.267082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.267103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.267112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.272224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.272245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.272253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.277359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.277380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.277388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.282445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.282467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.282475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.287549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.287571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.287580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.292657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.292679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.292688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.297927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.297949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.297958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:47.851 [2024-07-11 14:01:50.303193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:47.851 [2024-07-11 14:01:50.303218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.851 [2024-07-11 14:01:50.303227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.111 [2024-07-11 14:01:50.308426] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.111 [2024-07-11 14:01:50.308448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.111 [2024-07-11 14:01:50.308457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.111 [2024-07-11 14:01:50.313547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.111 [2024-07-11 14:01:50.313569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.313577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.318703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.318724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.318732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.323950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.323971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.323979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.329091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.329113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.329121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.334252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.334273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.334281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.339395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.339417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.339426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
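Each injected crc32c failure surfaces in this stream as one three-record group: the nvme_tcp data digest *ERROR*, the affected READ echoed by nvme_io_qpair_print_command, and its completion with status (00/22), i.e. status code type 0h (generic) / status code 22h, Command Transient Transport Error; dnr:0 leaves the command retryable. Because --nvme-error-stat was set at bring-up, bdev_nvme tallies each such completion, and after the 2-second run the harness asserts the counter is non-zero, exactly as the 14:01:48 get_transient_errcount trace did for the previous run. A minimal sketch of that check (rpc.py path and jq filter verbatim from the trace; the errcount variable is shorthand):

    # Read the transient-transport-error counter accumulated for nvme0n1
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # the digest test passes only if injected errors were observed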
00:31:48.112 [2024-07-11 14:01:50.344633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.344656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.344664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.350017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.350041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.350049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.355437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.355460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.355469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.360611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.360633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.360641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.365773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.365795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.370983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.371005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.371013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.376205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.376226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.376234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.381269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.381291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.381299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.386398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.386420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.386428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.391574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.391596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.391607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.396700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.396722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.396730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.401963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.401985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.401992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.407262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.407283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.407291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.412683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.412706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.412714] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.418098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.418119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.418127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.423173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.423194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.423202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.428349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.428370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.428378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.433388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.433410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.433417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.438702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.438728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.438736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.443967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.443988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.443996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.449050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.449072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.449080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.454197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.454218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.112 [2024-07-11 14:01:50.454226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.112 [2024-07-11 14:01:50.459296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.112 [2024-07-11 14:01:50.459317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.459325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.464239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.464260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.464268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.469464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.469486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.469495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.474614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.474635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.474643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.479739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.479760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.479768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.484773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.484795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
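[editor's note] Every *ERROR* record in this stretch comes from the same check: the receive path recomputes a CRC32C over the incoming PDU data and finds it differs from the data digest (DDGST) carried on the wire. Below is a minimal self-contained sketch of that digest computation using the bitwise CRC-32C (Castagnoli) algorithm; the payload buffer and the injected bit-flip are illustrative assumptions, not data taken from this run, and the accelerated path used by the test does not look like this loop.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78 -- the
 * checksum NVMe/TCP uses for header and data digests. Software sketch only. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];                 /* hypothetical PDU data */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t ddgst = crc32c(payload, sizeof(payload));   /* sender side */

    payload[7] ^= 0x01;                   /* simulate corruption in flight */

    if (crc32c(payload, sizeof(payload)) != ddgst)       /* receiver side */
        fprintf(stderr, "data digest error\n");
    return 0;
}

[end note]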
00:31:48.113 [2024-07-11 14:01:50.484803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.489821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.489842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.489850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.494821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.494842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.494850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.499821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.499842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.499851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.504836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.504857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.504865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.510131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.510152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.510167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.514838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.514860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.514868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.519883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.519906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.519914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.524965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.524986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.524997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.530015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.530036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.530045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.535130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.535151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.535164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.539802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.539822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.539829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.544769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.544791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.544799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.549662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.549685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.549693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.554570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.554591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.554599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.559706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.559730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.559739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.113 [2024-07-11 14:01:50.564777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.113 [2024-07-11 14:01:50.564799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.113 [2024-07-11 14:01:50.564807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.569758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.569783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.569792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.574807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.574828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.574837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.579731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.579753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.579761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.584797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.584818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.584826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.589731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.589753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.589761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.594774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.594795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.594803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.374 [2024-07-11 14:01:50.599775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.374 [2024-07-11 14:01:50.599797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.374 [2024-07-11 14:01:50.599805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.604816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.604838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.604846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.609919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.609940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.609952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.614962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.614984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.614992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.620020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.620042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.620050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.625048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 
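[editor's note] The function named in the *ERROR* records, nvme_tcp_accel_seq_recv_compute_crc32_done, fires when an offloaded CRC32C computation over received data completes. What happens on a mismatch is sketched below under assumed names: the computed value is compared against the digest read off the wire, the request is flagged, and it is completed back to the host as a transport-level failure. The struct and function names here are hypothetical stand-ins, not SPDK's internal API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical receive-side completion callback. */
struct recv_req {
    uint32_t computed_ddgst;   /* CRC32C produced by the accel sequence */
    uint32_t wire_ddgst;       /* digest field read from the PDU */
    bool     digest_error;
};

static void recv_compute_crc32_done(struct recv_req *req)
{
    if (req->computed_ddgst != req->wire_ddgst) {
        req->digest_error = true;
        fprintf(stderr, "data digest error on tqpair\n");
        /* The request is then completed with SCT 0h / SC 22h, which the
         * host prints as COMMAND TRANSIENT TRANSPORT ERROR and may retry. */
    }
}

int main(void)
{
    struct recv_req req = { 0x1badc0deu, 0xfeedfaceu, false };
    recv_compute_crc32_done(&req);
    return 0;
}

[end note]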
00:31:48.375 [2024-07-11 14:01:50.625069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.625077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.630006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.630027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.630035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.634997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.635018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.635026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.639970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.639991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.639999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.644977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.644997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.645005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.649945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.649966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.649974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.654955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.654979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.654987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.659956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.659977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.659985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.665010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.665031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.665039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.670073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.670095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.670103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.675119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.675140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.675148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.680077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.680098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.680106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.685081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.685102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.685109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.690045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.690066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.690074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.695104] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.695125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.695134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.700084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.700105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.700113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.705136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.705157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.705170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.710128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.710149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.710157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.715113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.715134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.715142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.720086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.720106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.720114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.375 [2024-07-11 14:01:50.725069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.375 [2024-07-11 14:01:50.725088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.375 [2024-07-11 14:01:50.725096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:31:48.375 [2024-07-11 14:01:50.730009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.730030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.730038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.734991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.735012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.735020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.739938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.739959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.739970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.744921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.744941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.744949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.749979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.750000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.750008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.754922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.754943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.754951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.759860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.759881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.759888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.764807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.764828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.764835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.769854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.769875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.769883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.774939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.774958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.774967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.780023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.780044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.780051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.785057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.785081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.785089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.790065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.790086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.790094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.795043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.795065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.795072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.800044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.800065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.800072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.805048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.805070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.805077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.810146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.810173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.810182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.815167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.815188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.815196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.820168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.820188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.820196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.376 [2024-07-11 14:01:50.825246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.376 [2024-07-11 14:01:50.825267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.376 [2024-07-11 14:01:50.825275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.830363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
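[editor's note] The *NOTICE* completions all carry the pair (00/22) plus p/m/dnr flags. Assuming the standard NVMe completion-status layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, More in bit 14, Do Not Retry in bit 15), SCT 0h with SC 22h is the generic-status Command Transient Transport Error. A small decoding sketch with illustrative values, not parsed from this run:

#include <stdint.h>
#include <stdio.h>

/* Decode a 16-bit NVMe completion status word into the fields the log
 * prints: phase tag (bit 0), SC (bits 8:1), SCT (bits 11:9), more (bit 14),
 * do-not-retry (bit 15). */
static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xFF;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT 0h (generic command status), SC 22h: transient transport error. */
    print_status((uint16_t)((0x0u << 9) | (0x22u << 1)));  /* -> (00/22) p:0 m:0 dnr:0 */
    return 0;
}

With dnr:0 and m:0 the status is retryable, which is consistent with the same READ commands reappearing throughout this stretch of the log.
[end note]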
00:31:48.637 [2024-07-11 14:01:50.830392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.835445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.835465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.835473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.840543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.840564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.840572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.845608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.845629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.845637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.850648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.850670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.850678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.855709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.855732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.855740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.860715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.860737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.860745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.865681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.637 [2024-07-11 14:01:50.865703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.637 [2024-07-11 14:01:50.865711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.637 [2024-07-11 14:01:50.870698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.870719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.870731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.875759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.875780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.880812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.880832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.880840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.885821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.885842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.885850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.890831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.890852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.890860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.895877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.895897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.895905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.900884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.900905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.900913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.905942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.905962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.905970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.910984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.911005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.911013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.915988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.916009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.920992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.921013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.921021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.926008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.926029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.926037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.931043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.931064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.931071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.936044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 
00:31:48.638 [2024-07-11 14:01:50.936064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.936072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.941016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.941038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.941046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.946068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.946089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.946096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.951080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.951100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.951108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.956060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.956080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.956091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.961055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.961076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.966113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.966133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.966141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.971083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.971104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.971112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.976133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.976153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.976166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.981153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.981182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.986205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.986226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.986234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.991285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.991306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.991314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:50.996252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:50.996273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:50.996280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:51.001227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:51.001252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:51.001260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:51.006248] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:51.006269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:51.006277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:51.011243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.638 [2024-07-11 14:01:51.011264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.638 [2024-07-11 14:01:51.011272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.638 [2024-07-11 14:01:51.016227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.016247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.021215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.021243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.026243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.026263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.026271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.031316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.031337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.031345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.036242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.036263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.036270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:31:48.639 [2024-07-11 14:01:51.041248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.041268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.041276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.046253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.046275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.046282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.051306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.051327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.051334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.056346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.056367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.056374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.061298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.061320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.061328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.066332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.066353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.066362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.071181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.071201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.071209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.075300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.075320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.075329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.079406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.079427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.079434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.083578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.083598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.083610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.639 [2024-07-11 14:01:51.087693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.639 [2024-07-11 14:01:51.087714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.639 [2024-07-11 14:01:51.087722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.900 [2024-07-11 14:01:51.091823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.900 [2024-07-11 14:01:51.091846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.900 [2024-07-11 14:01:51.091855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.900 [2024-07-11 14:01:51.096005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.900 [2024-07-11 14:01:51.096027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.900 [2024-07-11 14:01:51.096035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.100110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.100131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.100140] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.104259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.104279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.104288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.108643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.108663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.112803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.112823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.112832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.117024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.117045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.117053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.121139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.121170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.121178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.125328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.125349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.125357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.129473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.129494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.129502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.133664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.133685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.133692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.137781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.137801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.137809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.141913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.141942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.146060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.146080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.146088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.150191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.150212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.150220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.154352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.154373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.154381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.158506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.158526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:48.901 [2024-07-11 14:01:51.158534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.162649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.162670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.162678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.166824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.166844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.166852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.170944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.170965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.170972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.175104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.175125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.175132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.179301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.179321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.179329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.183458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.183479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.183487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.187686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.187707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.187715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.191908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.191941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.196036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.196064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.200179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.200200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.200210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.204335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.204355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.204363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.208515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.208535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.208543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.212660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.212681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.901 [2024-07-11 14:01:51.212689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.901 [2024-07-11 14:01:51.216839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.901 [2024-07-11 14:01:51.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.216867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.221083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.221112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.225247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.225268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.225276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.229437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.229457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.229465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.233633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.233653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.233663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.237787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.237808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.237816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.241890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.241911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.241919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.246112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.246132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.246140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.250249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.250269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.250277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.254424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.254445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.254453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.258554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.258574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.258582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.262688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.262708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.262719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.266783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.266804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.266811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.270907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.270927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.270935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.275053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 
00:31:48.902 [2024-07-11 14:01:51.275074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.279197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.279217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.279225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.283366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.283386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.283394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.287484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.287504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.287512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.291628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.291648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.291656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.295766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.295786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.295794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.299883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.299907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.299915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.303992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.304012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.304020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.308098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.308119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.308126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.312250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.312271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.312279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.316407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.316427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.316435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.320542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.320562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.320569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.324635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.324656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.324664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.328777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.328797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.328805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.332928] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.332948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.332957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.902 [2024-07-11 14:01:51.337036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.902 [2024-07-11 14:01:51.337056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.902 [2024-07-11 14:01:51.337064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:48.903 [2024-07-11 14:01:51.341145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.903 [2024-07-11 14:01:51.341171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.903 [2024-07-11 14:01:51.341179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:48.903 [2024-07-11 14:01:51.345348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.903 [2024-07-11 14:01:51.345368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.903 [2024-07-11 14:01:51.345377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:48.903 [2024-07-11 14:01:51.349503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.903 [2024-07-11 14:01:51.349524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.903 [2024-07-11 14:01:51.349532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:48.903 [2024-07-11 14:01:51.353734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:48.903 [2024-07-11 14:01:51.353756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:48.903 [2024-07-11 14:01:51.353764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.357906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.357928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.357938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:31:49.163 [2024-07-11 14:01:51.362065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.362086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.362094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.366190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.366210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.366218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.370313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.370334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.370345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.374454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.374475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.374483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.378607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.378627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.378635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.382703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.382724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.382732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.386936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.386957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.386965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.391085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.391106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.391114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.395214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.395235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.399357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.399378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.399386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.403522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.403543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.403550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.407666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.163 [2024-07-11 14:01:51.407692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.163 [2024-07-11 14:01:51.407700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.163 [2024-07-11 14:01:51.411839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.411860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.411868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.416013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.416034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.416041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.420225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.420245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.420254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.424401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.424421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.424428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.428522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.428542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.428550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.432652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.432672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.432680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.436783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.436803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.436811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.440970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.440991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.440999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:49.164 [2024-07-11 14:01:51.445197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0) 00:31:49.164 [2024-07-11 14:01:51.445217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:49.164 [2024-07-11 14:01:51.445225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:49.164 [2024-07-11 14:01:51.449363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0)
00:31:49.164 [2024-07-11 14:01:51.449384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:49.164 [2024-07-11 14:01:51.449392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further data digest errors between 14:01:51.453 and 14:01:51.781 elided: each repeats the same triple -- nvme_tcp.c:1391 data digest error on tqpair=(0x210f5c0), a READ on qid:1 with varying cid (0-15) and lba, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:31:49.427 [2024-07-11 14:01:51.789950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210f5c0)
00:31:49.427 [2024-07-11 14:01:51.789976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:49.427 [2024-07-11 14:01:51.789985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:49.427
00:31:49.427 Latency(us)
00:31:49.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.427 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:49.427 nvme0n1 : 2.00 5955.26 744.41 0.00 0.00 2683.41 566.32 9915.88
00:31:49.427 ===================================================================================================================
00:31:49.427 Total : 5955.26 744.41 0.00 0.00 2683.41 566.32 9915.88
00:31:49.427 0
00:31:49.427 14:01:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:49.427 14:01:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:49.427 14:01:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:49.427 | .driver_specific
00:31:49.427 | .nvme_error
00:31:49.427 | .status_code
00:31:49.427 | .command_transient_transport_error'
00:31:49.427 14:01:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
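The trace above is where digest.sh derives its pass/fail signal: with --nvme-error-stat enabled, the bdev layer keeps a per-status-code NVMe error count, and the script reads the COMMAND TRANSIENT TRANSPORT ERROR tally out of bdev_get_iostat. A minimal standalone sketch of that query, assuming only that the bdevperf RPC socket at /var/tmp/bperf.sock is still live; $SPDK is our shorthand for the workspace path seen in the trace, everything else is copied from the logged commands:

# Count transient transport errors recorded for bdev nvme0n1, as digest.sh@27-28 does.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
# This run printed 384, which the (( 384 > 0 )) check below treats as success:
# the injected digest corruptions really did surface as transient transport errors.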
00:31:49.687 14:01:51 -- host/digest.sh@71 -- # (( 384 > 0 ))
00:31:49.687 14:01:51 -- host/digest.sh@73 -- # killprocess 1777780
00:31:49.687 14:01:51 -- common/autotest_common.sh@926 -- # '[' -z 1777780 ']'
00:31:49.687 14:01:51 -- common/autotest_common.sh@930 -- # kill -0 1777780
00:31:49.687 14:01:51 -- common/autotest_common.sh@931 -- # uname
00:31:49.687 14:01:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:49.687 14:01:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1777780
00:31:49.687 14:01:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:49.687 14:01:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:49.687 14:01:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1777780'
00:31:49.687 killing process with pid 1777780
00:31:49.687 14:01:52 -- common/autotest_common.sh@945 -- # kill 1777780
00:31:49.687 Received shutdown signal, test time was about 2.000000 seconds
00:31:49.687
00:31:49.687 Latency(us)
00:31:49.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.687 ===================================================================================================================
00:31:49.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:49.687 14:01:52 -- common/autotest_common.sh@950 -- # wait 1777780
00:31:49.946 14:01:52 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:31:49.946 14:01:52 -- host/digest.sh@54 -- # local rw bs qd
00:31:49.946 14:01:52 -- host/digest.sh@56 -- # rw=randwrite
00:31:49.947 14:01:52 -- host/digest.sh@56 -- # bs=4096
00:31:49.947 14:01:52 -- host/digest.sh@56 -- # qd=128
00:31:49.947 14:01:52 -- host/digest.sh@58 -- # bperfpid=1778484
00:31:49.947 14:01:52 -- host/digest.sh@60 -- # waitforlisten 1778484 /var/tmp/bperf.sock
00:31:49.947 14:01:52 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:49.947 14:01:52 -- common/autotest_common.sh@819 -- # '[' -z 1778484 ']'
00:31:49.947 14:01:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:49.947 14:01:52 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:49.947 14:01:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:49.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:49.947 14:01:52 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:49.947 14:01:52 -- common/autotest_common.sh@10 -- # set +x
00:31:49.947 [2024-07-11 14:01:52.252209] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
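The second pass (randwrite, 4 KiB blocks, queue depth 128) uses the same launch pattern the trace above shows: start bdevperf idle in RPC-wait mode, then block until its UNIX-domain socket accepts connections. A hedged sketch of that launch; the bdevperf flags are copied verbatim from the logged command line, while $SPDK and the polling loop standing in for waitforlisten are our additions:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -z keeps bdevperf idle (no bdevs, no I/O) until it is configured over the RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# digest.sh calls waitforlisten "$bperfpid" /var/tmp/bperf.sock here; a rough
# equivalent is to poll until any RPC (rpc_get_methods) succeeds on the socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done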
00:31:49.947 [2024-07-11 14:01:52.252262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778484 ]
00:31:49.947 EAL: No free 2048 kB hugepages reported on node 1
00:31:49.947 [2024-07-11 14:01:52.307104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:49.947 [2024-07-11 14:01:52.341353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:50.882 14:01:53 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:50.882 14:01:53 -- common/autotest_common.sh@852 -- # return 0
00:31:50.882 14:01:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:50.882 14:01:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:50.882 14:01:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:50.882 14:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:50.882 14:01:53 -- common/autotest_common.sh@10 -- # set +x
00:31:50.882 14:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:50.882 14:01:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:50.882 14:01:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:51.140 nvme0n1
00:31:51.140 14:01:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:51.140 14:01:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:51.140 14:01:53 -- common/autotest_common.sh@10 -- # set +x
00:31:51.140 14:01:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:51.140 14:01:53 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:51.140 14:01:53 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:51.140 Running I/O for 2 seconds...
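Reading the interleaved trace above back into plain commands: the script enables per-status-code error accounting with unlimited bdev retries, makes sure crc32c error injection starts out disabled, attaches the TCP target with data digest (--ddgst) enabled so every payload is CRC32C-checked, arms the accel layer to corrupt 256 crc32c operations, and only then starts the timed run. A hedged reconstruction of that sequence; every RPC and flag is verbatim from the log, only the $SPDK/$RPC shorthands are ours:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats; retry failed I/O forever
$RPC accel_error_inject_error -o crc32c -t disable                   # start with crc32c injection off
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # data digest on; prints bdev name nvme0n1
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt the next 256 crc32c operations
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces in the log below as a 'Data digest error' on the qpair followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the infinite retry count turns into a retried, ultimately successful write.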
00:31:51.140 [2024-07-11 14:01:53.565748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ee5c8
00:31:51.140 [2024-07-11 14:01:53.566601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:51.140 [2024-07-11 14:01:53.566631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0
[... further data digest errors between 14:01:53.574 and 14:01:54.174 elided: each repeats the same triple -- tcp.c:2034 Data digest error on tqpair=(0x1c55d30) with a varying pdu, a WRITE on qid:1 with varying cid and lba, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:31:51.962 [2024-07-11 14:01:54.183028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50
00:31:51.962 [2024-07-11 14:01:54.183874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12742 len:1 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.183893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.191910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:51.962 [2024-07-11 14:01:54.192751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.192770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.200809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:51.962 [2024-07-11 14:01:54.201664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.201683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.209684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:51.962 [2024-07-11 14:01:54.210542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.210561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.218550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:51.962 [2024-07-11 14:01:54.219560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.219580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.227432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:51.962 [2024-07-11 14:01:54.228313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.228332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.236280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e3498 00:31:51.962 [2024-07-11 14:01:54.237216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.237235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.245176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ee190 00:31:51.962 [2024-07-11 14:01:54.246174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.246193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:51.962 [2024-07-11 14:01:54.254045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f46d0 00:31:51.962 [2024-07-11 14:01:54.255049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.962 [2024-07-11 14:01:54.255069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.262892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f96f8 00:31:51.963 [2024-07-11 14:01:54.263878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.263896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.271817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7818 00:31:51.963 [2024-07-11 14:01:54.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.272749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.280681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7818 00:31:51.963 [2024-07-11 14:01:54.281857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.281876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.289428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f0bc0 00:31:51.963 [2024-07-11 14:01:54.289751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.289769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.298584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f1430 00:31:51.963 [2024-07-11 14:01:54.299407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.307617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eee38 00:31:51.963 [2024-07-11 14:01:54.308507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:14424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.308533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.316602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.317419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.317442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.325487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.326313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.326333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.334375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.335221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.335240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.343296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.344136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.344157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.352171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.353015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.353034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.361061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.361841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.361861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.369974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.370752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.370771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.379154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.379954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.379974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.388078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.388907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.388926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.397097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.397914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.397933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.405996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.406908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.406927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:51.963 [2024-07-11 14:01:54.415061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:51.963 [2024-07-11 14:01:54.416008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:51.963 [2024-07-11 14:01:54.416027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.221 [2024-07-11 14:01:54.424236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.221 [2024-07-11 14:01:54.425167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.425186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.433246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.434154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.434177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.442187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.443044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.443063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.451051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.451914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.451935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.459928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.460801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.460822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.468860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.469836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.469855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.477747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.478643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.478663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.486666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.487566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.487586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.495578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fb048 00:31:52.222 
[2024-07-11 14:01:54.496486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.496506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.504476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fb048 00:31:52.222 [2024-07-11 14:01:54.505395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.505414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.513418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.222 [2024-07-11 14:01:54.514343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.514362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.522328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f9b30 00:31:52.222 [2024-07-11 14:01:54.523270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.523290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.531226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190efae0 00:31:52.222 [2024-07-11 14:01:54.532172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.532194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.540154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f1ca0 00:31:52.222 [2024-07-11 14:01:54.541110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.541130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.549044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e4578 00:31:52.222 [2024-07-11 14:01:54.550037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.550056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.558154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with 
pdu=0x2000190f7100 00:31:52.222 [2024-07-11 14:01:54.559129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.559149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.567144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f31b8 00:31:52.222 [2024-07-11 14:01:54.567996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.568015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.576203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ef270 00:31:52.222 [2024-07-11 14:01:54.576899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.576918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.585090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ef270 00:31:52.222 [2024-07-11 14:01:54.586130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.586149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.594017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ef270 00:31:52.222 [2024-07-11 14:01:54.594941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.594959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.602873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e5a90 00:31:52.222 [2024-07-11 14:01:54.603789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.603808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.611766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e9e10 00:31:52.222 [2024-07-11 14:01:54.612979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.612998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.620869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c55d30) with pdu=0x2000190e9e10 00:31:52.222 [2024-07-11 14:01:54.621779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.621797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.629762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e9e10 00:31:52.222 [2024-07-11 14:01:54.630731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.630750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.638694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fb048 00:31:52.222 [2024-07-11 14:01:54.639771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.639790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.647116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f6458 00:31:52.222 [2024-07-11 14:01:54.647906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.647925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.655857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e12d8 00:31:52.222 [2024-07-11 14:01:54.656089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.665894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f92c0 00:31:52.222 [2024-07-11 14:01:54.666971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.666991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.222 [2024-07-11 14:01:54.674936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fbcf0 00:31:52.222 [2024-07-11 14:01:54.676077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.222 [2024-07-11 14:01:54.676096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:52.481 [2024-07-11 14:01:54.683894] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e27f0 00:31:52.481 [2024-07-11 14:01:54.685029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.685047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.692812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f9f68 00:31:52.482 [2024-07-11 14:01:54.693860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.693880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.701711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fb8b8 00:31:52.482 [2024-07-11 14:01:54.702768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.702787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.710661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f7100 00:31:52.482 [2024-07-11 14:01:54.711801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.711820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.719728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ed920 00:31:52.482 [2024-07-11 14:01:54.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.720787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.728951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ea680 00:31:52.482 [2024-07-11 14:01:54.729498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.729518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.736919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f1868 00:31:52.482 [2024-07-11 14:01:54.737616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.737635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.745774] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e38d0 00:31:52.482 [2024-07-11 14:01:54.746523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.746542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.754718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f9f68 00:31:52.482 [2024-07-11 14:01:54.755496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.755514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.763630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e88f8 00:31:52.482 [2024-07-11 14:01:54.764446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.764468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.772557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ec408 00:31:52.482 [2024-07-11 14:01:54.773384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.773403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.781505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ec408 00:31:52.482 [2024-07-11 14:01:54.782364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.782389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.790370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e84c0 00:31:52.482 [2024-07-11 14:01:54.791193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.791212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.799249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fc560 00:31:52.482 [2024-07-11 14:01:54.800069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.800087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:52.482 
[2024-07-11 14:01:54.809684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e88f8 00:31:52.482 [2024-07-11 14:01:54.810574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.810593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.818584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e84c0 00:31:52.482 [2024-07-11 14:01:54.819443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.819462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.827468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e4578 00:31:52.482 [2024-07-11 14:01:54.828307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.828326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.836344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:52.482 [2024-07-11 14:01:54.836980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.836998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.845206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e3d08 00:31:52.482 [2024-07-11 14:01:54.845832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.845851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.854099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f7970 00:31:52.482 [2024-07-11 14:01:54.854708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.854727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.862295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f92c0 00:31:52.482 [2024-07-11 14:01:54.863496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.863516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.871166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e23b8 00:31:52.482 [2024-07-11 14:01:54.872156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.872179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.879475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e95a0 00:31:52.482 [2024-07-11 14:01:54.879624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.879642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.888597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f2d80 00:31:52.482 [2024-07-11 14:01:54.889217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.889236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.897476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f9f68 00:31:52.482 [2024-07-11 14:01:54.898103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.898121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.906387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f7100 00:31:52.482 [2024-07-11 14:01:54.907037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.907055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.915268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e49b0 00:31:52.482 [2024-07-11 14:01:54.915915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.915934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.924155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.482 [2024-07-11 14:01:54.924819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.924838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:52.482 [2024-07-11 14:01:54.933064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.482 [2024-07-11 14:01:54.933743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.482 [2024-07-11 14:01:54.933763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.942071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e6fa8 00:31:52.741 [2024-07-11 14:01:54.942753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.942772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.950997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.951686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.951705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.959860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.960557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.960576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.968927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.969632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.969651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.977848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.978563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.978583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.986722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.987444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.987463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:54.995612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:54.996350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:54.996372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.004529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.005274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.005293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.013601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.014356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.014374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.022501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.023269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.023288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.031399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.032173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.032192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.040285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.041064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.041082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.049212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.049999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.050018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.058209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.059007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.059027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.067172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.067975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.067994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.076099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.076925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.076943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.085057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.085880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.085900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.093968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.094807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.094825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.102904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.103750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.103771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.111805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.112657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.112676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.120690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:52.741 [2024-07-11 14:01:55.121543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.741 [2024-07-11 14:01:55.121562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:52.741 [2024-07-11 14:01:55.129561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190edd58 00:31:52.741 [2024-07-11 14:01:55.130412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.137985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ed4e8 00:31:52.742 [2024-07-11 14:01:55.138746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.138764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.147064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f8618 00:31:52.742 [2024-07-11 14:01:55.147509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.147528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.156080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fb048 00:31:52.742 [2024-07-11 14:01:55.156682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.156702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.164911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fc128 00:31:52.742 [2024-07-11 14:01:55.165530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.165549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.173853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f6cc8 00:31:52.742 [2024-07-11 14:01:55.174468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 
14:01:55.174487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.182928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e5658 00:31:52.742 [2024-07-11 14:01:55.183549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.183568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:52.742 [2024-07-11 14:01:55.191843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:52.742 [2024-07-11 14:01:55.192476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:52.742 [2024-07-11 14:01:55.192496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:53.000 [2024-07-11 14:01:55.200813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fbcf0 00:31:53.001 [2024-07-11 14:01:55.201457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.201478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.209717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.210369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.210388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.218644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.219315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.219333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.227532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.228204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.228226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.236412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:53.001 [2024-07-11 14:01:55.237107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.245333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.246016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.246034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.254210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.254905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.254924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.263090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.263798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.263816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.272016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.272729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.272748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.280916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.281640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.281659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.289851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.290589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.290609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.298735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.299475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:269 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.299493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.307607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.308395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.308414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.316708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.317469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.317488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.325591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.326359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.326379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.334482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.335262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.335280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.343461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.344251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.344270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.352365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.353164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.353182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.361246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.362049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22307 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.362068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.370089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f96f8 00:31:53.001 [2024-07-11 14:01:55.370892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.370911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.380037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f96f8 00:31:53.001 [2024-07-11 14:01:55.380863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.380882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.389062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190eea00 00:31:53.001 [2024-07-11 14:01:55.389876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.389895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:53.001 [2024-07-11 14:01:55.397973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e4578 00:31:53.001 [2024-07-11 14:01:55.398789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.001 [2024-07-11 14:01:55.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.406891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e23b8 00:31:53.002 [2024-07-11 14:01:55.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.407726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.415829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e7c50 00:31:53.002 [2024-07-11 14:01:55.416642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.416662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.424711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f2510 00:31:53.002 [2024-07-11 14:01:55.425508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:22370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.425527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.433627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f3a28 00:31:53.002 [2024-07-11 14:01:55.434411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.434429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.442501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fa3a0 00:31:53.002 [2024-07-11 14:01:55.443275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.443295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:53.002 [2024-07-11 14:01:55.450770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ebfd0 00:31:53.002 [2024-07-11 14:01:55.452223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.002 [2024-07-11 14:01:55.452241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.459633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e38d0 00:31:53.260 [2024-07-11 14:01:55.460519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.460538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.468618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e84c0 00:31:53.260 [2024-07-11 14:01:55.469522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.469540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.477635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4f40 00:31:53.260 [2024-07-11 14:01:55.478582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.478600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.486007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f1430 00:31:53.260 [2024-07-11 14:01:55.486115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.486133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.494997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e9168 00:31:53.260 [2024-07-11 14:01:55.495257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.495276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.505313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190fc128 00:31:53.260 [2024-07-11 14:01:55.506615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.506634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.514209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ebb98 00:31:53.260 [2024-07-11 14:01:55.515523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.515541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.522918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f4b08 00:31:53.260 [2024-07-11 14:01:55.523999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.260 [2024-07-11 14:01:55.524018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:53.260 [2024-07-11 14:01:55.530757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190e5a90 00:31:53.260 [2024-07-11 14:01:55.531298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.261 [2024-07-11 14:01:55.531317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:53.261 [2024-07-11 14:01:55.539680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190ea680 00:31:53.261 [2024-07-11 14:01:55.540234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.261 [2024-07-11 14:01:55.540255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:53.261 [2024-07-11 14:01:55.548566] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f35f0 00:31:53.261 [2024-07-11 14:01:55.549119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:53.261 [2024-07-11 14:01:55.549137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:53.261 [2024-07-11 14:01:55.557495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c55d30) with pdu=0x2000190f0350
00:31:53.261 [2024-07-11 14:01:55.558066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:53.261 [2024-07-11 14:01:55.558088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:31:53.261
00:31:53.261 Latency(us)
00:31:53.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:53.261 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:53.261 nvme0n1 : 2.00 28589.58 111.68 0.00 0.00 4472.00 2037.31 10542.75
00:31:53.261 ===================================================================================================================
00:31:53.261 Total : 28589.58 111.68 0.00 0.00 4472.00 2037.31 10542.75
00:31:53.261 0
00:31:53.261 14:01:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:53.261 14:01:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:53.261 14:01:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:53.261 | .driver_specific
00:31:53.261 | .nvme_error
00:31:53.261 | .status_code
00:31:53.261 | .command_transient_transport_error'
00:31:53.261 14:01:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:53.518 14:01:55 -- host/digest.sh@71 -- # (( 224 > 0 ))
00:31:53.518 14:01:55 -- host/digest.sh@73 -- # killprocess 1778484
00:31:53.518 14:01:55 -- common/autotest_common.sh@926 -- # '[' -z 1778484 ']'
00:31:53.518 14:01:55 -- common/autotest_common.sh@930 -- # kill -0 1778484
00:31:53.518 14:01:55 -- common/autotest_common.sh@931 -- # uname
00:31:53.518 14:01:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:53.518 14:01:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1778484
00:31:53.518 14:01:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:53.518 14:01:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:53.518 14:01:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1778484'
00:31:53.518 killing process with pid 1778484
00:31:53.518 14:01:55 -- common/autotest_common.sh@945 -- # kill 1778484
00:31:53.518 Received shutdown signal, test time was about 2.000000 seconds
00:31:53.518
00:31:53.518 Latency(us)
00:31:53.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:53.518 ===================================================================================================================
00:31:53.518 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:53.518 14:01:55 -- common/autotest_common.sh@950 -- # wait 1778484
00:31:53.777 14:01:55 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:31:53.777 14:01:55 -- host/digest.sh@54 -- # local rw bs qd
00:31:53.777 14:01:55 -- host/digest.sh@56 -- # rw=randwrite
00:31:53.777 14:01:55 -- host/digest.sh@56 -- # bs=131072
00:31:53.777 14:01:55 -- host/digest.sh@56 -- # qd=16
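
The (( 224 > 0 )) check above is the pass condition for the first run: because the controller was attached after bdev_nvme_set_options --nvme-error-stat, the NVMe bdev driver keeps per-status-code error counters, and get_transient_errcount pulls the TRANSIENT TRANSPORT ERROR tally out of bdev_get_iostat with the jq filter shown in the trace. A minimal stand-alone sketch of that read-back, assuming the bdevperf RPC server is still listening on /var/tmp/bperf.sock:

#!/usr/bin/env bash
# Sketch only: read back the counter that host/digest.sh asserts on.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# Same assertion as host/digest.sh@71; this run counted 224 such errors.
(( errcount > 0 )) && echo "transient transport errors: $errcount"
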
00:31:53.777 14:01:55 -- host/digest.sh@58 -- # bperfpid=1779066
00:31:53.777 14:01:55 -- host/digest.sh@60 -- # waitforlisten 1779066 /var/tmp/bperf.sock
00:31:53.777 14:01:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:53.777 14:01:55 -- common/autotest_common.sh@819 -- # '[' -z 1779066 ']'
00:31:53.777 14:01:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:53.777 14:01:55 -- common/autotest_common.sh@824 -- # local max_retries=100
00:31:53.777 14:01:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:53.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:53.777 14:01:55 -- common/autotest_common.sh@828 -- # xtrace_disable
00:31:53.777 14:01:55 -- common/autotest_common.sh@10 -- # set +x
00:31:53.777 [2024-07-11 14:01:56.017713] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:31:53.777 [2024-07-11 14:01:56.017759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779066 ]
00:31:53.777 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:53.777 Zero copy mechanism will not be used.
00:31:53.777 EAL: No free 2048 kB hugepages reported on node 1
00:31:53.777 [2024-07-11 14:01:56.072329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:53.777 [2024-07-11 14:01:56.111378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:31:54.711 14:01:56 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:31:54.711 14:01:56 -- common/autotest_common.sh@852 -- # return 0
00:31:54.711 14:01:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:54.711 14:01:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:54.711 14:01:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:54.711 14:01:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:54.711 14:01:56 -- common/autotest_common.sh@10 -- # set +x
00:31:54.711 14:01:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:54.711 14:01:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.711 14:01:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:54.970 nvme0n1
00:31:54.970 14:01:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:54.970 14:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:54.970 14:01:57 -- common/autotest_common.sh@10 -- # set +x
00:31:54.970 14:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:54.970 14:01:57 -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:54.970 14:01:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
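
Unrolled from the xtrace above, the second run's setup is a short RPC sequence against a fresh bdevperf instance. A sketch with the commands taken verbatim from the trace; it assumes the nvmf target configured earlier in the job is still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420:

#!/usr/bin/env bash
# Sketch of the randwrite/131072/16 digest-error run driven above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# bdevperf in RPC-server mode (-z): no I/O runs until perform_tests is sent.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
  -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-status-code NVMe error counters, and retry failed bdev I/O
# indefinitely so the injected digest errors do not fail the job outright.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean injection state, then attach with TCP data digest
# (--ddgst) so every data PDU carries a CRC32C.
$RPC accel_error_inject_error -o crc32c -t disable
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption (flags exactly as traced) and kick off the workload.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
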
00:31:55.230 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:55.230 Zero copy mechanism will not be used.
00:31:55.230 Running I/O for 2 seconds...
00:31:55.230 [2024-07-11 14:01:57.495500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.495647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.495675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.504001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.504134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.504157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.509665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.509778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.514505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.514571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.514589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.519018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.519105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.519123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.523730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.523823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.230 [2024-07-11 14:01:57.523842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:55.230 [2024-07-11 14:01:57.528060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.230 [2024-07-11 14:01:57.528132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.230 [2024-07-11 14:01:57.528150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.230 [2024-07-11 14:01:57.533124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.230 [2024-07-11 14:01:57.533486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.230 [2024-07-11 14:01:57.533507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.230 [2024-07-11 14:01:57.538356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.538601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.538621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.543869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.544007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.544025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.550221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.550374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.556655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.556768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.556787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.564183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.564368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.570854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.570984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.571002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.577193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.577367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.577384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.583614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.583915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.583934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.589318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.589584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.589605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.596426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.596669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.596689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.602759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.602885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.602903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.608042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.608182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.608201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.612389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.612587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.612605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.616820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.616935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.616953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.620991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.621116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.625089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.625257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.628804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.629028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.629047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.632440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.632647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.632668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.636042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.636170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.640276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 
14:01:57.640399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.640417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.644077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.644157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.644183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.647707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.647796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.647818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.651330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.651459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.651477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.654963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.655127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.655145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.658644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.658881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.658900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.662387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.662601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.666345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 
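
Every injected corruption in this run surfaces as the same three-record pattern seen throughout: a tcp.c:2034 data_crc32_calc_done *ERROR* for the PDU whose CRC32C did not match, then the nvme_qpair.c *NOTICE* pair printing the affected WRITE and its TRANSIENT TRANSPORT ERROR (00/22) completion. The console output can therefore cross-check the RPC counter; a quick sketch over a saved copy of this log (the file name console.log is a placeholder):

# Completions carrying the injected status; should match the RPC counter:
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log
# Digest errors flagged by the TCP transport, one per corrupted PDU:
grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
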
00:31:55.231 [2024-07-11 14:01:57.666421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.666438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.669925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.670047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.670064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.673468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.673545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.673566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.231 [2024-07-11 14:01:57.677011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.231 [2024-07-11 14:01:57.677098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.231 [2024-07-11 14:01:57.677116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:55.232 [2024-07-11 14:01:57.680563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.232 [2024-07-11 14:01:57.680642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.232 [2024-07-11 14:01:57.680660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:55.232 [2024-07-11 14:01:57.684257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.232 [2024-07-11 14:01:57.684427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.232 [2024-07-11 14:01:57.684447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:55.492 [2024-07-11 14:01:57.687931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:55.492 [2024-07-11 14:01:57.688165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:55.492 [2024-07-11 14:01:57.688185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.492 [2024-07-11 14:01:57.691539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.492 [2024-07-11 14:01:57.691728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.492 [2024-07-11 14:01:57.691746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:55.492 [2024-07-11 14:01:57.695403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:55.492 [2024-07-11 14:01:57.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:55.492 [2024-07-11 14:01:57.695515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... same three-line pattern repeats from 14:01:57.699463 through 14:01:58.335946 (console timestamps 00:31:55.492-00:31:56.017): tcp.c:2034:data_crc32_calc_done reports "Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90", nvme_qpair.c: 243 prints the failed WRITE (sqid:1 cid:0 nsid:1 len:32, lba varying per command), and nvme_qpair.c: 474 prints a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 and p:0 m:0 dnr:0 ...]
00:31:56.017 [2024-07-11 14:01:58.339493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:56.017 [2024-07-11 14:01:58.339595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.017 [2024-07-11 14:01:58.339612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:31:56.017 [2024-07-11 14:01:58.343036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.017 [2024-07-11 14:01:58.343119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.017 [2024-07-11 14:01:58.343137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.017 [2024-07-11 14:01:58.346640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.017 [2024-07-11 14:01:58.346760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.017 [2024-07-11 14:01:58.346778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.017 [2024-07-11 14:01:58.350257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.017 [2024-07-11 14:01:58.350424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.017 [2024-07-11 14:01:58.350444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.017 [2024-07-11 14:01:58.353879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.354132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.354151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.357680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.357854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.357872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.361934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.362108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.362128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.368218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.368346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.368363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.372792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.372901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.372919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.377765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.377907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.377924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.383018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.383212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.383229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.388882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.389122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.389141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.396077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.396355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.396374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.404098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.404310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.404331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.410881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.411021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.411040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.418214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.418422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.418442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.424986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.425192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.425210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.431836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.432047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.432066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.440573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.440884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.440904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.449644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.449778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.449796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.457553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.457782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.457801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.018 [2024-07-11 14:01:58.465788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.018 [2024-07-11 14:01:58.466018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.018 [2024-07-11 14:01:58.466038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.472855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.472988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.473005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.479531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.479715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.479732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.486936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.487196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.487214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.493902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.494062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.494080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.501593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.501776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.501794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.509516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.509766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.509785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.517985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.518189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 
14:01:58.518208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.525664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.525790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.525808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.533236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.533376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.533394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.540689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.540851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.540871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.548572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.548716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.548735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.553768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.553881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.553899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.557610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.557680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.561456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.561657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:56.279 [2024-07-11 14:01:58.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.565322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.565507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.565526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.279 [2024-07-11 14:01:58.569026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.279 [2024-07-11 14:01:58.569129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.279 [2024-07-11 14:01:58.569146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.572746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.572842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.572860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.576427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.576503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.576524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.580370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.580504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.580522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.583989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.584094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.584112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.587883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.588035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.588053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.591605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.591820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.591839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.595253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.595449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.595467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.598925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.599029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.599047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.602562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.602640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.602658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.606168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.606248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.606267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.609801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.609923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.609943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.613932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.614008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.614026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.617844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.617968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.617987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.621567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.621776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.621794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.625304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.625577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.625597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.633093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.633491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.633511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.642259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.642419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.642436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.648970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.649060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.649077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.654812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.654925] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.654943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.660332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.660412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.660431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.665378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.665487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.665505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.670029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.670217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.670238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.674066] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.674297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.674317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.677818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.677993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.678011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.681557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.681651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.681670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.685224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.685307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.685325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.688952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.689045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.689062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.692599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.280 [2024-07-11 14:01:58.692681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.280 [2024-07-11 14:01:58.692705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.280 [2024-07-11 14:01:58.696283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.696400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.696418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.699973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.700144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.700169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.703703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.703939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.703959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.707369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.707564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.707584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.711073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 
14:01:58.711184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.711203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.714810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.714946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.714965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.718494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.718576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.718594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.722200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.722318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.722337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.726354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.726460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.726481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.281 [2024-07-11 14:01:58.730340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.281 [2024-07-11 14:01:58.730515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.281 [2024-07-11 14:01:58.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.734841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.735054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.735074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.740099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with 
pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.740296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.740316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.744537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.744618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.744636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.749290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.749420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.749437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.753331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.753431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.753449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.757138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.757222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.757241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.760965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.761087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.761106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.764765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.764937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.764955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.768950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.769174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.769195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.772905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.773116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.773135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.776655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.776744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.776762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.780426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.780551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.780569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.784402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.784479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.784497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.789154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.789228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.789246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.794255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.794368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.794386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.798817] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.798967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.804420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.804524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.804542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.809126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.809300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.542 [2024-07-11 14:01:58.809318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.542 [2024-07-11 14:01:58.813248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.542 [2024-07-11 14:01:58.813332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.813350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.816977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.817121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.817139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.820657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.820747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.820765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.824362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.824450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.824469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.828029] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.828172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.828190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.831805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.831963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.831981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.835521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.835765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.835789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.839144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.839318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.839336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.842837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.842920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.842939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.846503] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.846624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.846641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:56.543 [2024-07-11 14:01:58.850203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:56.543 [2024-07-11 14:01:58.850282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:56.543 [2024-07-11 14:01:58.850299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:56.543 
[2024-07-11 14:01:58.853868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:56.543 [2024-07-11 14:01:58.853929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.543 [2024-07-11 14:01:58.853947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:56.543 [2024-07-11 14:01:58.857879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:56.543 [2024-07-11 14:01:58.858008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.543 [2024-07-11 14:01:58.858027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line sequence repeats for every subsequent WRITE on qid:1 (cid:0, lba varying, sqhd cycling 0001/0021/0041/0061): a data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90, the failing command, then a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
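Each *ERROR* line above is the NVMe/TCP receive path in tcp.c (data_crc32_calc_done) reporting that the CRC32C data digest (DDGST) computed over a data PDU's payload disagrees with the digest carried in the PDU, after which the affected WRITE is failed back as a transient transport error. Below is a minimal stand-alone sketch of that digest check, assuming a plain bitwise CRC32C (reflected polynomial 0x82F63B78); SPDK itself uses optimized CRC32C helpers, so this is illustrative only and none of it is SPDK code.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): the digest algorithm NVMe/TCP uses for
 * HDGST/DDGST. A real transport would use a table-driven or hardware
 * (SSE4.2) version; the arithmetic is the same. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;                    /* initial value */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                      /* final XOR */
}

/* Returns 0 when the digest carried in the PDU matches the payload;
 * a non-zero return is the condition logged as "Data digest error". */
static int verify_ddgst(const void *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));

    uint32_t ddgst = crc32c(block, sizeof(block));
    printf("intact payload:  %s\n",
           verify_ddgst(block, sizeof(block), ddgst) ? "digest error" : "ok");

    /* Corrupt one payload bit after the digest was computed: the
     * receiver now sees exactly the mismatch reported above. */
    block[100] ^= 0x01;
    printf("corrupt payload: %s\n",
           verify_ddgst(block, sizeof(block), ddgst) ? "digest error" : "ok");
    return 0;
}

Because the digest covers only the payload, a single flipped bit anywhere in the data (or a deliberately corrupted digest, as a fault-injection test would send) is enough to trip the check on every PDU, which matches the steady stream of errors in this run.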
00:31:56.544 [2024-07-11 14:01:58.990187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:56.544 [2024-07-11 14:01:58.990404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.544 [2024-07-11 14:01:58.990423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the pattern continues unchanged through 14:01:59.15x; only the timestamps, lba, and sqhd values differ ...]
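In the completion lines, "(00/22)" is the (status code type / status code) pair: SCT 00h is the generic command status set, and within it SC 22h is Command Transient Transport Error; sqhd, p, m, and dnr are the submission queue head pointer, phase tag, more bit, and do-not-retry bit of the completion queue entry. A sketch of that unpacking, following the standard NVMe CQE field layout; the struct and helper here are hypothetical illustrations, not SPDK's API:

#include <stdint.h>
#include <stdio.h>

/* Fields of interest from an NVMe completion queue entry: DW2 carries
 * sqhd (bits 15:0); DW3 carries cid (15:0), the phase tag (bit 16),
 * and the 15-bit status field (bits 31:17). */
struct cpl_fields {
    uint16_t sqhd, cid;
    uint8_t  p, sct, sc, m, dnr;
};

static struct cpl_fields decode_cpl(uint32_t dw2, uint32_t dw3)
{
    struct cpl_fields f;

    f.sqhd = dw2 & 0xFFFF;        /* submission queue head pointer */
    f.cid  = dw3 & 0xFFFF;        /* command identifier            */
    f.p    = (dw3 >> 16) & 0x1;   /* phase tag                     */
    f.sc   = (dw3 >> 17) & 0xFF;  /* status code                   */
    f.sct  = (dw3 >> 25) & 0x7;   /* status code type              */
    f.m    = (dw3 >> 30) & 0x1;   /* more: log page available      */
    f.dnr  = (dw3 >> 31) & 0x1;   /* do not retry                  */
    return f;
}

int main(void)
{
    /* Reconstruct one completion from the log: sct=00h (generic),
     * sc=22h (command transient transport error), cid:0, sqhd:0021. */
    uint32_t dw2 = 0x00000021u;
    uint32_t dw3 = 0x22u << 17;

    struct cpl_fields f = decode_cpl(dw2, dw3);
    printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           f.sct, f.sc, f.cid, f.sqhd, f.p, f.m, f.dnr);
    return 0;
}

Run against the values above, this prints "(00/22) cid:0 sqhd:0021 p:0 m:0 dnr:0", i.e. the same rendering spdk_nvme_print_completion emits in the log.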
00:31:56.806 [2024-07-11 14:01:59.160061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:56.806 [2024-07-11 14:01:59.160234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.806 [2024-07-11 14:01:59.160251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the run continues; from 14:01:59.298154 the failing WRITEs carry cid:15 instead of cid:0 ...]
00:31:57.069 [2024-07-11 14:01:59.298154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:57.069 [2024-07-11 14:01:59.298316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.069 [2024-07-11 14:01:59.298334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:57.069 [2024-07-11 14:01:59.301796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:57.069 [2024-07-11 14:01:59.301925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.069 [2024-07-11 14:01:59.301943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
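Every one of these completions carries dnr:0, so the controller is not forbidding a retry, and a generic (sct 00h) transient transport status is exactly the class of failure a host driver may requeue; whether it does, and how many times, is host policy. A sketch of such a retry predicate, with the status constants mirroring the log and the three-attempt budget an assumption made up for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x00  /* generic command status set  */
#define SC_TRANSIENT_TRANSPORT_ERR 0x22  /* the (00/22) seen in the log */

/* Decide whether a failed command should be requeued. The dnr bit
 * always wins; the attempt cap is a hypothetical driver policy. */
static bool should_retry(uint8_t sct, uint8_t sc, uint8_t dnr,
                         unsigned attempts, unsigned max_attempts)
{
    if (dnr || attempts >= max_attempts) {
        return false;
    }
    return sct == SCT_GENERIC && sc == SC_TRANSIENT_TRANSPORT_ERR;
}

int main(void)
{
    /* The case above: (00/22) with dnr:0, first failure. */
    printf("retry? %s\n",
           should_retry(SCT_GENERIC, SC_TRANSIENT_TRANSPORT_ERR, 0, 1, 3)
               ? "yes" : "no");
    return 0;
}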
00:31:57.069 [2024-07-11 14:01:59.305389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:57.069 [2024-07-11 14:01:59.305462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.069 [2024-07-11 14:01:59.305481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the digest-error/transient-transport-error sequence keeps repeating; from 14:01:59.357339 the failing WRITEs report cid:0 again ...]
00:31:57.070 [2024-07-11 14:01:59.413257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90
00:31:57.070 [2024-07-11 14:01:59.413388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:57.070 [2024-07-11
14:01:59.413405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.417676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.417913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.417933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.421936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.422219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.422239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.426475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.426632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.426651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.430897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.430998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.431019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.435221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.435291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.435308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.439586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.439672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.439690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.443926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.444105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.444125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.448322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.448450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.448468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.452856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.453101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.453121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.457391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.457634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.457653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.461655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.461819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.461839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.466229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.466352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.466369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.470424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.470539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.470557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.474658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.474756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.474773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.478870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.479042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.479060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.483094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.483247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.483265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:57.070 [2024-07-11 14:01:59.487516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c56070) with pdu=0x2000190fef90 00:31:57.070 [2024-07-11 14:01:59.487581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.070 [2024-07-11 14:01:59.487599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:57.070 00:31:57.070 Latency(us) 00:31:57.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.070 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:57.070 nvme0n1 : 2.00 6814.01 851.75 0.00 0.00 2344.08 1510.18 11454.55 00:31:57.070 =================================================================================================================== 00:31:57.070 Total : 6814.01 851.75 0.00 0.00 2344.08 1510.18 11454.55 00:31:57.070 0 00:31:57.070 14:01:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:57.070 14:01:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:57.070 14:01:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:57.070 | .driver_specific 00:31:57.070 | .nvme_error 00:31:57.070 | .status_code 00:31:57.070 | .command_transient_transport_error' 00:31:57.070 14:01:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:57.330 14:01:59 -- host/digest.sh@71 -- # (( 440 > 0 )) 00:31:57.330 14:01:59 -- host/digest.sh@73 -- # killprocess 1779066 00:31:57.330 14:01:59 -- common/autotest_common.sh@926 -- # '[' -z 1779066 ']' 00:31:57.330 14:01:59 -- common/autotest_common.sh@930 -- # kill -0 1779066 00:31:57.330 14:01:59 -- common/autotest_common.sh@931 -- # uname 00:31:57.330 14:01:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.330 14:01:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1779066 00:31:57.330 14:01:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:57.330 14:01:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:57.330 14:01:59 -- 
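Note: the (( 440 > 0 )) assertion above is what makes the digest-error test pass; the count comes straight from the initiator-side iostat. A minimal sketch of the same query, assuming the bdevperf RPC socket at /var/tmp/bperf.sock is still listening:

  # Read the transient-transport-error counter for bdev nvme0n1 (one increment per corrupted digest)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'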
00:31:57.330 14:01:59 -- host/digest.sh@73 -- # killprocess 1779066
00:31:57.330 14:01:59 -- common/autotest_common.sh@926 -- # '[' -z 1779066 ']'
00:31:57.330 14:01:59 -- common/autotest_common.sh@930 -- # kill -0 1779066
00:31:57.330 14:01:59 -- common/autotest_common.sh@931 -- # uname
00:31:57.330 14:01:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:57.330 14:01:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1779066
00:31:57.330 14:01:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:57.330 14:01:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:57.330 14:01:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1779066'
00:31:57.330 killing process with pid 1779066
00:31:57.330 14:01:59 -- common/autotest_common.sh@945 -- # kill 1779066
00:31:57.330 Received shutdown signal, test time was about 2.000000 seconds
00:31:57.330
00:31:57.330 Latency(us)
00:31:57.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.330 ===================================================================================================================
00:31:57.330 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:57.330 14:01:59 -- common/autotest_common.sh@950 -- # wait 1779066
00:31:57.330 14:01:59 -- host/digest.sh@115 -- # killprocess 1777050
00:31:57.589 14:01:59 -- common/autotest_common.sh@926 -- # '[' -z 1777050 ']'
00:31:57.589 14:01:59 -- common/autotest_common.sh@930 -- # kill -0 1777050
00:31:57.589 14:01:59 -- common/autotest_common.sh@931 -- # uname
00:31:57.589 14:01:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:57.589 14:01:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1777050
00:31:57.589 14:01:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:31:57.589 14:01:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:31:57.589 14:01:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1777050'
00:31:57.589 killing process with pid 1777050
00:31:57.589 14:01:59 -- common/autotest_common.sh@945 -- # kill 1777050
00:31:57.589 14:01:59 -- common/autotest_common.sh@950 -- # wait 1777050
00:31:57.849
00:31:57.849 real 0m15.812s
00:31:57.849 user 0m30.471s
00:31:57.849 sys 0m4.678s
00:31:57.849 14:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:57.849 14:02:00 -- common/autotest_common.sh@10 -- # set +x
00:31:57.849 ************************************
00:31:57.849 END TEST nvmf_digest_error
00:31:57.849 ************************************
00:31:57.849 14:02:00 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:31:57.849 14:02:00 -- host/digest.sh@139 -- # nvmftestfini
00:31:57.849 14:02:00 -- nvmf/common.sh@476 -- # nvmfcleanup
00:31:57.849 14:02:00 -- nvmf/common.sh@116 -- # sync
00:31:57.849 14:02:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:31:57.849 14:02:00 -- nvmf/common.sh@119 -- # set +e
00:31:57.849 14:02:00 -- nvmf/common.sh@120 -- # for i in {1..20}
00:31:57.849 14:02:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:31:57.849 rmmod nvme_tcp
00:31:57.849 rmmod nvme_fabrics
00:31:57.849 rmmod nvme_keyring
00:31:57.849 14:02:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:31:57.849 14:02:00 -- nvmf/common.sh@123 -- # set -e
00:31:57.849 14:02:00 -- nvmf/common.sh@124 -- # return 0
00:31:57.849 14:02:00 -- nvmf/common.sh@477 -- # '[' -n 1777050 ']'
00:31:57.849 14:02:00 -- nvmf/common.sh@478 -- # killprocess 1777050
00:31:57.849 14:02:00 -- common/autotest_common.sh@926 -- # '[' -z 1777050 ']'
00:31:57.849 14:02:00 -- common/autotest_common.sh@930 -- # kill -0 1777050
00:31:57.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1777050) - No such process
00:31:57.849 14:02:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1777050 is not found'
00:31:57.849 Process with pid 1777050 is not found
00:31:57.849 14:02:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:31:57.849 14:02:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:31:57.849 14:02:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
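Note: the killprocess trace above is the stock liveness-check idiom from autotest_common.sh; a condensed sketch of what it does (the pid and the reactor_0 comm value are from this run):

  pid=1777050
  if kill -0 "$pid" 2>/dev/null; then          # signal 0 delivers nothing; it only probes existence
    ps --no-headers -o comm= "$pid"            # comm check; a 'sudo' wrapper is special-cased
    kill "$pid" && wait "$pid"                 # terminate, then reap (works because nvmf_tgt is a child of this shell)
  else
    echo "Process with pid $pid is not found"  # the second killprocess of 1777050 above lands here
  fi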
00:31:57.849 14:02:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:57.849 14:02:00 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:31:57.849 14:02:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:57.849 14:02:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:57.849 14:02:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:00.387 14:02:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:32:00.387
00:32:00.387 real 0m36.811s
00:32:00.387 user 0m57.415s
00:32:00.387 sys 0m13.040s
00:32:00.387 14:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:00.387 14:02:02 -- common/autotest_common.sh@10 -- # set +x
00:32:00.387 ************************************
00:32:00.387 END TEST nvmf_digest
00:32:00.387 ************************************
00:32:00.387 14:02:02 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:32:00.387 14:02:02 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:32:00.387 14:02:02 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:32:00.387 14:02:02 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:00.387 14:02:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:32:00.387 14:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:00.387 14:02:02 -- common/autotest_common.sh@10 -- # set +x
00:32:00.387 ************************************
00:32:00.387 START TEST nvmf_bdevperf
00:32:00.387 ************************************
00:32:00.387 14:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:00.387 * Looking for test storage...
00:32:00.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:00.387 14:02:02 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:00.387 14:02:02 -- nvmf/common.sh@7 -- # uname -s
00:32:00.387 14:02:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:00.387 14:02:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:00.387 14:02:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:00.387 14:02:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:00.387 14:02:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:00.387 14:02:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:00.387 14:02:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:00.387 14:02:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:00.387 14:02:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:00.387 14:02:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:00.387 14:02:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:32:00.387 14:02:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:32:00.387 14:02:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:00.387 14:02:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:00.387 14:02:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:00.387 14:02:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:00.387 14:02:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:00.387 14:02:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:00.387 14:02:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:00.387 14:02:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:00.387 14:02:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same toolchain prefixes repeated, Go entry promoted to the front ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:00.387 14:02:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain prefixes repeated, protoc entry promoted to the front ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:00.387 14:02:02 -- paths/export.sh@5 -- # export PATH
00:32:00.387 14:02:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:00.387 14:02:02 -- nvmf/common.sh@46 -- # : 0
00:32:00.387 14:02:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:32:00.387 14:02:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:32:00.387 14:02:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:32:00.387 14:02:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:00.387 14:02:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:00.387 14:02:02 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:32:00.387 14:02:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:32:00.387 14:02:02 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:32:00.387 14:02:02 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:00.387 14:02:02 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:32:00.387 14:02:02 -- host/bdevperf.sh@24 -- # nvmftestinit
00:32:00.387 14:02:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:32:00.387 14:02:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:00.387 14:02:02 -- nvmf/common.sh@436 -- # prepare_net_devs
00:32:00.387 14:02:02 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:32:00.387 14:02:02
-- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:00.387 14:02:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.387 14:02:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.387 14:02:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.387 14:02:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:00.387 14:02:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:00.387 14:02:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:00.387 14:02:02 -- common/autotest_common.sh@10 -- # set +x 00:32:05.656 14:02:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:05.656 14:02:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:05.656 14:02:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:05.656 14:02:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:05.656 14:02:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:05.656 14:02:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:05.656 14:02:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:05.656 14:02:07 -- nvmf/common.sh@294 -- # net_devs=() 00:32:05.656 14:02:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:05.656 14:02:07 -- nvmf/common.sh@295 -- # e810=() 00:32:05.656 14:02:07 -- nvmf/common.sh@295 -- # local -ga e810 00:32:05.656 14:02:07 -- nvmf/common.sh@296 -- # x722=() 00:32:05.656 14:02:07 -- nvmf/common.sh@296 -- # local -ga x722 00:32:05.656 14:02:07 -- nvmf/common.sh@297 -- # mlx=() 00:32:05.656 14:02:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:05.656 14:02:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.656 14:02:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:05.656 14:02:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:05.656 14:02:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.656 14:02:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:05.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:05.656 14:02:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@351 -- # [[ tcp 
== rdma ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:05.656 14:02:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:05.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:05.656 14:02:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.656 14:02:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.656 14:02:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.656 14:02:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:05.656 Found net devices under 0000:86:00.0: cvl_0_0 00:32:05.656 14:02:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.656 14:02:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:05.656 14:02:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.656 14:02:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.656 14:02:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:05.656 Found net devices under 0000:86:00.1: cvl_0_1 00:32:05.656 14:02:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.656 14:02:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:05.656 14:02:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:05.656 14:02:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:05.656 14:02:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.656 14:02:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.656 14:02:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.656 14:02:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:05.656 14:02:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.656 14:02:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.656 14:02:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:05.656 14:02:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.656 14:02:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.656 14:02:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:05.656 14:02:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:05.656 14:02:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.656 14:02:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.657 14:02:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.657 14:02:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.657 14:02:07 -- nvmf/common.sh@257 -- # ip link set 
cvl_0_1 up 00:32:05.657 14:02:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.657 14:02:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.657 14:02:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.657 14:02:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:05.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:32:05.657 00:32:05.657 --- 10.0.0.2 ping statistics --- 00:32:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.657 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:05.657 14:02:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:05.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:32:05.657 00:32:05.657 --- 10.0.0.1 ping statistics --- 00:32:05.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.657 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:32:05.657 14:02:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.657 14:02:07 -- nvmf/common.sh@410 -- # return 0 00:32:05.657 14:02:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:05.657 14:02:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.657 14:02:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:05.657 14:02:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:05.657 14:02:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.657 14:02:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:05.657 14:02:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:05.657 14:02:07 -- host/bdevperf.sh@25 -- # tgt_init 00:32:05.657 14:02:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:05.657 14:02:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:05.657 14:02:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:05.657 14:02:07 -- common/autotest_common.sh@10 -- # set +x 00:32:05.657 14:02:07 -- nvmf/common.sh@469 -- # nvmfpid=1783038 00:32:05.657 14:02:07 -- nvmf/common.sh@470 -- # waitforlisten 1783038 00:32:05.657 14:02:07 -- common/autotest_common.sh@819 -- # '[' -z 1783038 ']' 00:32:05.657 14:02:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.657 14:02:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.657 14:02:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.657 14:02:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:05.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.657 14:02:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.657 14:02:07 -- common/autotest_common.sh@10 -- # set +x 00:32:05.657 [2024-07-11 14:02:07.374354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
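Note: condensing the nvmf_tcp_init trace above, with NET_TYPE=phy the harness turns the two ice ports into a loopback pair, moving one into a network namespace where nvmf_tgt will run. The essential commands, with interface names and addresses exactly as logged:

  ip netns add cvl_0_0_ns_spdk                          # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target (0.172 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator (0.250 ms above)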
00:32:05.657 [2024-07-11 14:02:07.374397] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.657 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.657 [2024-07-11 14:02:07.430721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:05.657 [2024-07-11 14:02:07.471507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:05.657 [2024-07-11 14:02:07.471623] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.657 [2024-07-11 14:02:07.471632] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.657 [2024-07-11 14:02:07.471638] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.657 [2024-07-11 14:02:07.471737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.657 [2024-07-11 14:02:07.471756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.657 [2024-07-11 14:02:07.471758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.916 14:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:05.916 14:02:08 -- common/autotest_common.sh@852 -- # return 0 00:32:05.916 14:02:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:05.916 14:02:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 14:02:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.916 14:02:08 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:05.916 14:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 [2024-07-11 14:02:08.215796] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.916 14:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.916 14:02:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:05.916 14:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 Malloc0 00:32:05.916 14:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.916 14:02:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:05.916 14:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 14:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.916 14:02:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:05.916 14:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 14:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.916 14:02:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.916 14:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.916 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:32:05.916 [2024-07-11 14:02:08.274766] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.916 14:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.916 14:02:08 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:05.916 14:02:08 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:05.916 14:02:08 -- nvmf/common.sh@520 -- # config=() 00:32:05.916 14:02:08 -- nvmf/common.sh@520 -- # local subsystem config 00:32:05.916 14:02:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:05.916 14:02:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:05.916 { 00:32:05.916 "params": { 00:32:05.916 "name": "Nvme$subsystem", 00:32:05.916 "trtype": "$TEST_TRANSPORT", 00:32:05.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.916 "adrfam": "ipv4", 00:32:05.916 "trsvcid": "$NVMF_PORT", 00:32:05.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.916 "hdgst": ${hdgst:-false}, 00:32:05.916 "ddgst": ${ddgst:-false} 00:32:05.916 }, 00:32:05.916 "method": "bdev_nvme_attach_controller" 00:32:05.916 } 00:32:05.916 EOF 00:32:05.916 )") 00:32:05.916 14:02:08 -- nvmf/common.sh@542 -- # cat 00:32:05.916 14:02:08 -- nvmf/common.sh@544 -- # jq . 00:32:05.916 14:02:08 -- nvmf/common.sh@545 -- # IFS=, 00:32:05.916 14:02:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:05.916 "params": { 00:32:05.916 "name": "Nvme1", 00:32:05.916 "trtype": "tcp", 00:32:05.916 "traddr": "10.0.0.2", 00:32:05.916 "adrfam": "ipv4", 00:32:05.916 "trsvcid": "4420", 00:32:05.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.916 "hdgst": false, 00:32:05.916 "ddgst": false 00:32:05.916 }, 00:32:05.916 "method": "bdev_nvme_attach_controller" 00:32:05.916 }' 00:32:05.916 [2024-07-11 14:02:08.321308] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:05.916 [2024-07-11 14:02:08.321352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783255 ] 00:32:05.916 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.176 [2024-07-11 14:02:08.375715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.176 [2024-07-11 14:02:08.413812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.435 Running I/O for 1 seconds... 
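Note: the tgt_init sequence traced above reduces to five rpc.py calls against the target's default /var/tmp/spdk.sock socket; the arguments are exactly as recorded, the comments are a gloss:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -u 8192 sets the I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420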
00:32:07.373 00:32:07.373 Latency(us) 00:32:07.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.373 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:07.373 Verification LBA range: start 0x0 length 0x4000 00:32:07.373 Nvme1n1 : 1.01 16755.09 65.45 0.00 0.00 7610.27 918.93 14816.83 00:32:07.373 =================================================================================================================== 00:32:07.373 Total : 16755.09 65.45 0.00 0.00 7610.27 918.93 14816.83 00:32:07.631 14:02:09 -- host/bdevperf.sh@30 -- # bdevperfpid=1783493 00:32:07.631 14:02:09 -- host/bdevperf.sh@32 -- # sleep 3 00:32:07.631 14:02:09 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:07.631 14:02:09 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:07.631 14:02:09 -- nvmf/common.sh@520 -- # config=() 00:32:07.631 14:02:09 -- nvmf/common.sh@520 -- # local subsystem config 00:32:07.631 14:02:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:07.631 14:02:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:07.631 { 00:32:07.631 "params": { 00:32:07.631 "name": "Nvme$subsystem", 00:32:07.631 "trtype": "$TEST_TRANSPORT", 00:32:07.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.631 "adrfam": "ipv4", 00:32:07.631 "trsvcid": "$NVMF_PORT", 00:32:07.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.631 "hdgst": ${hdgst:-false}, 00:32:07.631 "ddgst": ${ddgst:-false} 00:32:07.631 }, 00:32:07.631 "method": "bdev_nvme_attach_controller" 00:32:07.631 } 00:32:07.631 EOF 00:32:07.631 )") 00:32:07.631 14:02:09 -- nvmf/common.sh@542 -- # cat 00:32:07.631 14:02:09 -- nvmf/common.sh@544 -- # jq . 00:32:07.631 14:02:09 -- nvmf/common.sh@545 -- # IFS=, 00:32:07.631 14:02:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:07.631 "params": { 00:32:07.631 "name": "Nvme1", 00:32:07.631 "trtype": "tcp", 00:32:07.631 "traddr": "10.0.0.2", 00:32:07.631 "adrfam": "ipv4", 00:32:07.631 "trsvcid": "4420", 00:32:07.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.631 "hdgst": false, 00:32:07.631 "ddgst": false 00:32:07.631 }, 00:32:07.631 "method": "bdev_nvme_attach_controller" 00:32:07.631 }' 00:32:07.631 [2024-07-11 14:02:09.907263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:07.631 [2024-07-11 14:02:09.907310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783493 ] 00:32:07.631 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.631 [2024-07-11 14:02:09.962908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.631 [2024-07-11 14:02:09.997862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.890 Running I/O for 15 seconds... 
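Note: the --json /dev/fd/63 in the command line above is bash process substitution; gen_nvmf_target_json writes the config onto an anonymous pipe. A hand-rolled equivalent of this second, 15-second run (the "params" object is verbatim from the trace; the outer "subsystems" wrapper and the /tmp/bperf.json filename are assumptions, since only the params fragment is printed here):

cat > /tmp/bperf.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
 "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
 "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
 "hdgst":false,"ddgst":false}}]}]}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f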
00:32:10.424 14:02:12 -- host/bdevperf.sh@33 -- # kill -9 1783038
00:32:10.424 14:02:12 -- host/bdevperf.sh@35 -- # sleep 3
00:32:10.688 [2024-07-11 14:02:12.880935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:10.688 [2024-07-11 14:02:12.880973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 48 similar command/completion pairs trimmed: with the target killed, every READ/WRITE still queued on qid:1 (lba 103600-104536, len:8) is failed back as ABORTED - SQ DELETION (00/08) ...]
00:32:10.690 [2024-07-11 14:02:12.881791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:10.690 [2024-07-11 14:02:12.881798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690
[2024-07-11 14:02:12.881806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.881814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.881990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.881996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.690 [2024-07-11 14:02:12.882292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.690 [2024-07-11 14:02:12.882444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.690 [2024-07-11 14:02:12.882453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104816 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 
[2024-07-11 14:02:12.882736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:10.691 [2024-07-11 14:02:12.882751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882887] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:10.691 [2024-07-11 14:02:12.882978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.882986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16df380 is same with the state(5) to be set 00:32:10.691 [2024-07-11 14:02:12.882995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:10.691 [2024-07-11 14:02:12.883001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:10.691 [2024-07-11 14:02:12.883007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104432 len:8 PRP1 0x0 PRP2 0x0 00:32:10.691 [2024-07-11 14:02:12.883014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:10.691 [2024-07-11 14:02:12.883056] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16df380 was disconnected and freed. reset controller. 
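[editor note] The repeated `ABORTED - SQ DELETION (00/08)` completions above are expected once the submission queue is torn down for a controller reset: every in-flight READ/WRITE on qid:1 is completed with Status Code Type 0x0 (generic) and Status Code 0x08 (Command Aborted due to SQ Deletion) before the qpair is freed. As a quick reference, the sketch below decodes a completion Status Field into the same "(SCT/SC) ... p: m: dnr:" form that `spdk_nvme_print_completion` renders; the bit layout follows the NVMe base specification, and the sample value is illustrative, not taken from this run.

```c
#include <stdio.h>
#include <stdint.h>

/* Decode an NVMe completion Status Field (CQE DW3 bits 31:16) into the
 * "(SCT/SC) ... p:A m:B dnr:C" form used by spdk_nvme_print_completion.
 * Bit layout per the NVMe base spec: bit 0 = phase tag (P), bits 8:1 =
 * status code (SC), bits 11:9 = status code type (SCT), bits 13:12 =
 * command retry delay (CRD), bit 14 = more (M), bit 15 = do not retry
 * (DNR). */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* Hypothetical raw value: SCT 0x0 (generic), SC 0x08 (ABORTED - SQ
     * DELETION), phase/more/dnr all clear, i.e. 0x08 << 1 = 0x0010. */
    decode_status(0x0010);   /* prints "(00/08) p:0 m:0 dnr:0" */
    return 0;
}
```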
00:32:10.691 [2024-07-11 14:02:12.885006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.691 [2024-07-11 14:02:12.885062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.691 [2024-07-11 14:02:12.885592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.691 [2024-07-11 14:02:12.885787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.691 [2024-07-11 14:02:12.885818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.691 [2024-07-11 14:02:12.885841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.691 [2024-07-11 14:02:12.886123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.691 [2024-07-11 14:02:12.886456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.691 [2024-07-11 14:02:12.886466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.691 [2024-07-11 14:02:12.886474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.691 [2024-07-11 14:02:12.888296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.691 [2024-07-11 14:02:12.897057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.691 [2024-07-11 14:02:12.897512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.691 [2024-07-11 14:02:12.897793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.897825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.897849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.898131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.898417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.898427] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.898435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.900228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.692 [2024-07-11 14:02:12.908917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.909278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.909564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.909596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.909618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.909772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.909895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.909904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.909911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.911752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.692 [2024-07-11 14:02:12.920689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.921090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.921360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.921394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.921416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.921565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.921665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.921674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.921681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.923485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.692 [2024-07-11 14:02:12.932492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.932906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.933225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.933259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.933281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.933468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.933565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.933574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.933580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.935992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.692 [2024-07-11 14:02:12.944814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.945087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.945316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.945329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.945337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.945483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.945553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.945562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.945569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.947390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.692 [2024-07-11 14:02:12.956648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.956986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.957279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.957312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.957335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.957626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.957750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.957759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.957766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.959404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.692 [2024-07-11 14:02:12.968432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.968837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.968941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.968972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.968994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.969500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.969644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.969654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.969661] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.971474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.692 [2024-07-11 14:02:12.980430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.980736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.980944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.980955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.980962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.981098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.981228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.981238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.981244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.983077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.692 [2024-07-11 14:02:12.992538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:12.992973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.993330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.692 [2024-07-11 14:02:12.993365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.692 [2024-07-11 14:02:12.993387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.692 [2024-07-11 14:02:12.993619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.692 [2024-07-11 14:02:12.993874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.692 [2024-07-11 14:02:12.993883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.692 [2024-07-11 14:02:12.993890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.692 [2024-07-11 14:02:12.996442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.692 [2024-07-11 14:02:13.005164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.692 [2024-07-11 14:02:13.005545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.005737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.005749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.005760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.005876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.005991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.006001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.006007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.007699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.693 [2024-07-11 14:02:13.017112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.018286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.018497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.018511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.018519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.018650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.018759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.018769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.018775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.020666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.693 [2024-07-11 14:02:13.029164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.029586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.029729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.029741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.029748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.029912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.030075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.030085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.030091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.031817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.693 [2024-07-11 14:02:13.041082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.041463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.041649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.041661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.041669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.041805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.041923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.041932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.041939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.043906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.693 [2024-07-11 14:02:13.052994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.053424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.053641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.053653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.053661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.053763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.053866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.053876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.053883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.055477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.693 [2024-07-11 14:02:13.065081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.065473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.065741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.065753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.065761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.065832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.065951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.065960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.065966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.067920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.693 [2024-07-11 14:02:13.076870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.077283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.077552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.077564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.077571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.077692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.077810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.077820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.077826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.079644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.693 [2024-07-11 14:02:13.088998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.089428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.089674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.089686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.089693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.089798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.089919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.089928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.089936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.091779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.693 [2024-07-11 14:02:13.101083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.101500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.101679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.101692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.693 [2024-07-11 14:02:13.101700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.693 [2024-07-11 14:02:13.101828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.693 [2024-07-11 14:02:13.101957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.693 [2024-07-11 14:02:13.101967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.693 [2024-07-11 14:02:13.101975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.693 [2024-07-11 14:02:13.103988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.693 [2024-07-11 14:02:13.113390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.693 [2024-07-11 14:02:13.113791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.114058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.693 [2024-07-11 14:02:13.114071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.694 [2024-07-11 14:02:13.114079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.694 [2024-07-11 14:02:13.114212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.694 [2024-07-11 14:02:13.114328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.694 [2024-07-11 14:02:13.114338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.694 [2024-07-11 14:02:13.114346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.694 [2024-07-11 14:02:13.116278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.694 [2024-07-11 14:02:13.125606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.694 [2024-07-11 14:02:13.126036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.694 [2024-07-11 14:02:13.126293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.694 [2024-07-11 14:02:13.126309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.694 [2024-07-11 14:02:13.126317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.694 [2024-07-11 14:02:13.126453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.694 [2024-07-11 14:02:13.126590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.694 [2024-07-11 14:02:13.126600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.694 [2024-07-11 14:02:13.126608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.694 [2024-07-11 14:02:13.128666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.694 [2024-07-11 14:02:13.137868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.694 [2024-07-11 14:02:13.138279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.694 [2024-07-11 14:02:13.138474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.694 [2024-07-11 14:02:13.138487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.694 [2024-07-11 14:02:13.138496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.694 [2024-07-11 14:02:13.138608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.955 [2024-07-11 14:02:13.138770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.955 [2024-07-11 14:02:13.138781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.955 [2024-07-11 14:02:13.138789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.955 [2024-07-11 14:02:13.140824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.955 [2024-07-11 14:02:13.149955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.955 [2024-07-11 14:02:13.150347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.150553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.150565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.150573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.150705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.150839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.150848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.150859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.152772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.956 [2024-07-11 14:02:13.161981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.162321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.162629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.162661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.162683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.163113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.163489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.163499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.163506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.165107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.956 [2024-07-11 14:02:13.173991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.174412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.174545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.174577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.174599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.174977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.175181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.175208] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.175215] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.177036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.956 [2024-07-11 14:02:13.185900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.186228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.186328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.186339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.186346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.186467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.186575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.186585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.186595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.188313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.956 [2024-07-11 14:02:13.197931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.198318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.198531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.198563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.198585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.199015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.199217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.199227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.199234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.200947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.956 [2024-07-11 14:02:13.209784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.210176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.210354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.210366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.210373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.210467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.210589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.210597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.210604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.212222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.956 [2024-07-11 14:02:13.221603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.222484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.222713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.956 [2024-07-11 14:02:13.222727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.956 [2024-07-11 14:02:13.222734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.956 [2024-07-11 14:02:13.222877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.956 [2024-07-11 14:02:13.222986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.956 [2024-07-11 14:02:13.222995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.956 [2024-07-11 14:02:13.223002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.956 [2024-07-11 14:02:13.224768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.956 [2024-07-11 14:02:13.233433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.956 [2024-07-11 14:02:13.233711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.233912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.233946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.233968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.234317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.234490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.234500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.234507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.236329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.957 [2024-07-11 14:02:13.245321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.245643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.245791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.245802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.245809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.245905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.246028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.246036] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.246042] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.247875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.957 [2024-07-11 14:02:13.257242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.257552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.257689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.257701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.257708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.257822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.257906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.257915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.257921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.259541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.957 [2024-07-11 14:02:13.269214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.269554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.269755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.269787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.269808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.270302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.270404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.270414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.270420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.272140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.957 [2024-07-11 14:02:13.281205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.281587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.281816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.281847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.281869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.282149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.282494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.282504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.282511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.284245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.957 [2024-07-11 14:02:13.293259] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.293626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.293942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.293973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.293995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.294339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.294631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.294641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.294647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.296353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.957 [2024-07-11 14:02:13.305114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.305460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.305707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.305738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.957 [2024-07-11 14:02:13.305761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.957 [2024-07-11 14:02:13.306140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.957 [2024-07-11 14:02:13.306533] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.957 [2024-07-11 14:02:13.306560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.957 [2024-07-11 14:02:13.306581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.957 [2024-07-11 14:02:13.308557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.957 [2024-07-11 14:02:13.317070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.957 [2024-07-11 14:02:13.317391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.957 [2024-07-11 14:02:13.317579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.317611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.317633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.317931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.318040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.318049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.318056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.319820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.958 [2024-07-11 14:02:13.328891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.329287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.329528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.329559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.329581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.329851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.329960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.329969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.329975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.331656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.958 [2024-07-11 14:02:13.340843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.341270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.341418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.341449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.341479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.341859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.342046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.342055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.342062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.343853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.958 [2024-07-11 14:02:13.352762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.353180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.353379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.353411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.353433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.353757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.353857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.353867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.353873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.355524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.958 [2024-07-11 14:02:13.364582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.364869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.365197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.365231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.365254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.365683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.365870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.365880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.365886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.367570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.958 [2024-07-11 14:02:13.376319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.376677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.376966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.376998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.377026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.377421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.377649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.377659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.377665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.380263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:10.958 [2024-07-11 14:02:13.388784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.389148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.389344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.389356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.958 [2024-07-11 14:02:13.389363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.958 [2024-07-11 14:02:13.389507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.958 [2024-07-11 14:02:13.389650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.958 [2024-07-11 14:02:13.389660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.958 [2024-07-11 14:02:13.389667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.958 [2024-07-11 14:02:13.391363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:10.958 [2024-07-11 14:02:13.400855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:10.958 [2024-07-11 14:02:13.401274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.401475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:10.958 [2024-07-11 14:02:13.401507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:10.959 [2024-07-11 14:02:13.401530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:10.959 [2024-07-11 14:02:13.401943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:10.959 [2024-07-11 14:02:13.402053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:10.959 [2024-07-11 14:02:13.402062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:10.959 [2024-07-11 14:02:13.402068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:10.959 [2024-07-11 14:02:13.403960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.218 [2024-07-11 14:02:13.412950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.413345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.413562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.413573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.413581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.413699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.413829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.413838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.413845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.415669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.218 [2024-07-11 14:02:13.424776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.425184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.425436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.425468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.425490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.425609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.425662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.425670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.425676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.427231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.218 [2024-07-11 14:02:13.436535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.436929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.437251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.437285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.437308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.437739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.438106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.438116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.438123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.439817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.218 [2024-07-11 14:02:13.448419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.448873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.449197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.449231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.449253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.449634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.449925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.449956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.449962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.451740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.218 [2024-07-11 14:02:13.460265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.460702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.460936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.460968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.460990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.461238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.461325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.461334] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.461341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.463054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.218 [2024-07-11 14:02:13.472207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.472528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.472792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.472804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.472811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.472890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.472998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.473006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.473012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.474797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.218 [2024-07-11 14:02:13.484075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.484452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.484723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.484755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.484777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.485156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.485606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.485638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.485660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.487661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.218 [2024-07-11 14:02:13.496033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.496493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.496768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.496800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.496823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.497247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.218 [2024-07-11 14:02:13.497378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.218 [2024-07-11 14:02:13.497388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.218 [2024-07-11 14:02:13.497395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.218 [2024-07-11 14:02:13.499112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.218 [2024-07-11 14:02:13.507866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.218 [2024-07-11 14:02:13.508273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.508547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.218 [2024-07-11 14:02:13.508579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.218 [2024-07-11 14:02:13.508602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.218 [2024-07-11 14:02:13.508881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.219 [2024-07-11 14:02:13.509013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.219 [2024-07-11 14:02:13.509022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.219 [2024-07-11 14:02:13.509029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.219 [2024-07-11 14:02:13.510929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.219 [2024-07-11 14:02:13.519671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.219 [2024-07-11 14:02:13.520067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.219 [2024-07-11 14:02:13.520352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.219 [2024-07-11 14:02:13.520386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.219 [2024-07-11 14:02:13.520409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.219 [2024-07-11 14:02:13.520638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.219 [2024-07-11 14:02:13.521021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.219 [2024-07-11 14:02:13.521047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.219 [2024-07-11 14:02:13.521073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.219 [2024-07-11 14:02:13.522851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.219 – 00:32:11.785 [... the identical ten-record sequence above — resetting controller → connect() failed, errno = 111 (×2) → sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 → recv state error → Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor → Ctrlr is in error state → controller reinitialization failed → in failed state. → Resetting controller failed. — repeats for 46 further reset attempts at roughly 12 ms intervals, [2024-07-11 14:02:13.531544] through [2024-07-11 14:02:14.072135] ...]
00:32:11.785 [2024-07-11 14:02:14.081023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.081355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.081595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.081606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.081640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.082193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.082338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.082347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.082354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.084089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.785 [2024-07-11 14:02:14.092853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.093282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.093551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.093582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.093604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.093834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.094118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.094143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.094179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.096158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.785 [2024-07-11 14:02:14.104754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.105145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.105494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.105525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.105548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.105768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.105914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.105927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.105936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.108480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.785 [2024-07-11 14:02:14.116906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.117394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.117704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.117735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.117757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.118087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.118542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.118571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.118592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.120444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.785 [2024-07-11 14:02:14.128728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.129114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.129303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.129315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.129345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.129676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.130058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.130083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.130104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.132152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.785 [2024-07-11 14:02:14.140417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.140821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.141086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.141098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.141106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.141214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.141348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.141356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.141363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.143300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.785 [2024-07-11 14:02:14.152340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.152754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.153018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.153050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.153073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.153518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.153863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.153876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.153883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.155586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.785 [2024-07-11 14:02:14.164229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.164545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.164794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.164826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.164847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.165291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.165475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.165501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.785 [2024-07-11 14:02:14.165523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.785 [2024-07-11 14:02:14.167499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.785 [2024-07-11 14:02:14.176224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.785 [2024-07-11 14:02:14.176592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.176840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.785 [2024-07-11 14:02:14.176872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.785 [2024-07-11 14:02:14.176894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.785 [2024-07-11 14:02:14.177338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.785 [2024-07-11 14:02:14.177542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.785 [2024-07-11 14:02:14.177551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.177557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.786 [2024-07-11 14:02:14.179197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.786 [2024-07-11 14:02:14.188065] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.786 [2024-07-11 14:02:14.188454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.188720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.188752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.786 [2024-07-11 14:02:14.188775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.786 [2024-07-11 14:02:14.189055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.786 [2024-07-11 14:02:14.189351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.786 [2024-07-11 14:02:14.189379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.189408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.786 [2024-07-11 14:02:14.191264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.786 [2024-07-11 14:02:14.199928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.786 [2024-07-11 14:02:14.200356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.200527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.200559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.786 [2024-07-11 14:02:14.200582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.786 [2024-07-11 14:02:14.200813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.786 [2024-07-11 14:02:14.200950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.786 [2024-07-11 14:02:14.200959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.200965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.786 [2024-07-11 14:02:14.202794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.786 [2024-07-11 14:02:14.212179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.786 [2024-07-11 14:02:14.212529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.212837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.212869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.786 [2024-07-11 14:02:14.212891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.786 [2024-07-11 14:02:14.213060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.786 [2024-07-11 14:02:14.213195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.786 [2024-07-11 14:02:14.213205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.213212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.786 [2024-07-11 14:02:14.215146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.786 [2024-07-11 14:02:14.223984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.786 [2024-07-11 14:02:14.224300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.224582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.224614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.786 [2024-07-11 14:02:14.224636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.786 [2024-07-11 14:02:14.225066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.786 [2024-07-11 14:02:14.225456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.786 [2024-07-11 14:02:14.225483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.225505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:11.786 [2024-07-11 14:02:14.227436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:11.786 [2024-07-11 14:02:14.235859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:11.786 [2024-07-11 14:02:14.236237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.236533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.786 [2024-07-11 14:02:14.236565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:11.786 [2024-07-11 14:02:14.236588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:11.786 [2024-07-11 14:02:14.236740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:11.786 [2024-07-11 14:02:14.236858] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:11.786 [2024-07-11 14:02:14.236871] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:11.786 [2024-07-11 14:02:14.236879] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.239420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.047 [2024-07-11 14:02:14.248108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.248437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.248705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.248737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.248759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.249140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.249638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.249666] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.249688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.251611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.047 [2024-07-11 14:02:14.259973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.260384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.260693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.260726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.260749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.261034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.261130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.261141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.261148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.262937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.047 [2024-07-11 14:02:14.272014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.272427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.272590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.272621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.272643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.273073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.273256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.273268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.273274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.275122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.047 [2024-07-11 14:02:14.284212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.284571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.284722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.284735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.284742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.284872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.284960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.284969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.284976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.286696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.047 [2024-07-11 14:02:14.296034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.296412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.296622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.296634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.296641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.296740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.296870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.296879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.296886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.298457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.047 [2024-07-11 14:02:14.307800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.308181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.308332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.308343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.308350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.308429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.308538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.308546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.308553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.310195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.047 [2024-07-11 14:02:14.319737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.047 [2024-07-11 14:02:14.320039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.320347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.047 [2024-07-11 14:02:14.320383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.047 [2024-07-11 14:02:14.320406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.047 [2024-07-11 14:02:14.320787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.047 [2024-07-11 14:02:14.320900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.047 [2024-07-11 14:02:14.320910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.047 [2024-07-11 14:02:14.320916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.047 [2024-07-11 14:02:14.322635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.047 [2024-07-11 14:02:14.331643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.331953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.332193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.332227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.332250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.332580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.332892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.332901] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.332908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.334558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.048 [2024-07-11 14:02:14.343344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.343687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.343875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.343890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.343897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.343977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.344099] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.344107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.344113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.345722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.048 [2024-07-11 14:02:14.355139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.355532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.355770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.355802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.355825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.356070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.356199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.356208] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.356231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.358000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.048 [2024-07-11 14:02:14.367043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.367370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.367567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.367598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.367621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.368049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.368325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.368335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.368342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.371008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.048 [2024-07-11 14:02:14.379382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.379752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.379940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.379951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.379961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.380044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.380175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.380185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.380209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.381952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.048 [2024-07-11 14:02:14.391314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.391673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.391848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.391861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.391868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.392027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.392145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.392155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.392166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.393964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.048 [2024-07-11 14:02:14.403316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.403718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.403987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.404019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.404041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.404414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.404525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.404534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.404540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.406425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.048 [2024-07-11 14:02:14.415142] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.415538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.415729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.415762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.415784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.416188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.416513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.416523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.416530] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.418219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.048 [2024-07-11 14:02:14.427337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.048 [2024-07-11 14:02:14.427722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.427869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.048 [2024-07-11 14:02:14.427879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.048 [2024-07-11 14:02:14.427887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.048 [2024-07-11 14:02:14.427973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.048 [2024-07-11 14:02:14.428090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.048 [2024-07-11 14:02:14.428099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.048 [2024-07-11 14:02:14.428105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.048 [2024-07-11 14:02:14.429682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.048 [2024-07-11 14:02:14.439254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.439551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.439724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.439735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.439743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.439875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.440038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.440048] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.440054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.049 [2024-07-11 14:02:14.441752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.049 [2024-07-11 14:02:14.451347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.451700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.451912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.451924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.451932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.452038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.452146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.452155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.452167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.049 [2024-07-11 14:02:14.454046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.049 [2024-07-11 14:02:14.463455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.463806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.464071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.464083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.464090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.464198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.464302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.464310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.464317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.049 [2024-07-11 14:02:14.466022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.049 [2024-07-11 14:02:14.475561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.476013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.476235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.476249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.476257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.476345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.476462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.476470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.476477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.049 [2024-07-11 14:02:14.478233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.049 [2024-07-11 14:02:14.487699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.488075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.488220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.488233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.488241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.488404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.488522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.488535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.488542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.049 [2024-07-11 14:02:14.490465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.049 [2024-07-11 14:02:14.499682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.049 [2024-07-11 14:02:14.500058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.500305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.049 [2024-07-11 14:02:14.500317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.049 [2024-07-11 14:02:14.500325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.049 [2024-07-11 14:02:14.500410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.049 [2024-07-11 14:02:14.500573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.049 [2024-07-11 14:02:14.500583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.049 [2024-07-11 14:02:14.500589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.502465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.310 [2024-07-11 14:02:14.511899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.512232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.512500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.512512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.512520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.512638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.512773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.512783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.512789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.514598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.310 [2024-07-11 14:02:14.524089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.524276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.524542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.524555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.524562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.524663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.524751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.524760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.524770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.526588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.310 [2024-07-11 14:02:14.536082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.536448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.536693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.536705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.536712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.536784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.536855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.536864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.536871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.538833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.310 [2024-07-11 14:02:14.548174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.548525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.548792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.548804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.548811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.548913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.549000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.549009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.549015] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.550787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.310 [2024-07-11 14:02:14.560245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.560659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.560899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.560911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.560918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.561066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.561204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.561215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.561221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.562978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.310 [2024-07-11 14:02:14.572144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.572502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.572654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.572666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.572674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.572791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.572939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.572949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.572955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.574697] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.310 [2024-07-11 14:02:14.584207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.310 [2024-07-11 14:02:14.584618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.584814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.310 [2024-07-11 14:02:14.584826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.310 [2024-07-11 14:02:14.584834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.310 [2024-07-11 14:02:14.584981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.310 [2024-07-11 14:02:14.585113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.310 [2024-07-11 14:02:14.585123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.310 [2024-07-11 14:02:14.585129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.310 [2024-07-11 14:02:14.586980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.310 [2024-07-11 14:02:14.596172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.596548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.596814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.596826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.596834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.596950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.597067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.597077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.597083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.598899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.311 [2024-07-11 14:02:14.608163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.608516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.608781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.608792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.608799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.608931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.609050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.609060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.609066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.610763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.311 [2024-07-11 14:02:14.620016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.620444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.620637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.620649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.620656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.620757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.620920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.620930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.620936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.622722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.311 [2024-07-11 14:02:14.631830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.632236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.632479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.632491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.632498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.632616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.632734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.632744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.632750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.634464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.311 [2024-07-11 14:02:14.643934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.644373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.644619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.644632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.644641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.644727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.644845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.644853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.644860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.646665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
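Each failed reconnect above is followed by "Failed to flush tqpair=... (9): Bad file descriptor": by the time nvme_tcp_qpair_process_completions tries to flush, the qpair's socket has already been torn down, so the operation lands on a dead fd. Errno 9 is EBADF; a hedged sketch of the same errno using plain POSIX sockets — not the SPDK flush path itself:

    /* Sketch: errno 9 (EBADF) from using a socket fd after it was closed,
       mirroring the "(9): Bad file descriptor" flush failures above.
       Illustrative only, not the SPDK flush path. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                       /* socket already torn down */

        char byte = 0;
        if (send(fd, &byte, 1, 0) < 0) {
            /* Prints: send() failed, errno = 9 (Bad file descriptor) */
            printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }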
00:32:12.311 [2024-07-11 14:02:14.655971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.656310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.656572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.656585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.656593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.656694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.656797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.656806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.656812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.658630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.311 [2024-07-11 14:02:14.668018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.668389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.668633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.668645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.668653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.668801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.668903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.668913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.668920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.670588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.311 [2024-07-11 14:02:14.679938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.680311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.680523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.680538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.680546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.680663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.680813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.680823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.680829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.682749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.311 [2024-07-11 14:02:14.691935] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.692348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.692614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.692626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.692633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.692766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.692868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.692878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.692885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.694580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.311 [2024-07-11 14:02:14.703890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.311 [2024-07-11 14:02:14.704287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.704557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.311 [2024-07-11 14:02:14.704569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.311 [2024-07-11 14:02:14.704577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.311 [2024-07-11 14:02:14.704709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.311 [2024-07-11 14:02:14.704841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.311 [2024-07-11 14:02:14.704851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.311 [2024-07-11 14:02:14.704858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.311 [2024-07-11 14:02:14.706674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.311 [2024-07-11 14:02:14.715780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.312 [2024-07-11 14:02:14.716135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.716421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.716433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.312 [2024-07-11 14:02:14.716445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.312 [2024-07-11 14:02:14.716566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.312 [2024-07-11 14:02:14.716719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.312 [2024-07-11 14:02:14.716730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.312 [2024-07-11 14:02:14.716736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.312 [2024-07-11 14:02:14.718644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.312 [2024-07-11 14:02:14.727866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.312 [2024-07-11 14:02:14.728187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.728384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.728396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.312 [2024-07-11 14:02:14.728404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.312 [2024-07-11 14:02:14.728506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.312 [2024-07-11 14:02:14.728623] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.312 [2024-07-11 14:02:14.728632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.312 [2024-07-11 14:02:14.728640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.312 [2024-07-11 14:02:14.730446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.312 [2024-07-11 14:02:14.739862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.312 [2024-07-11 14:02:14.740227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.740489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.740500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.312 [2024-07-11 14:02:14.740508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.312 [2024-07-11 14:02:14.740626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.312 [2024-07-11 14:02:14.740728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.312 [2024-07-11 14:02:14.740737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.312 [2024-07-11 14:02:14.740744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.312 [2024-07-11 14:02:14.742610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.312 [2024-07-11 14:02:14.752031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.312 [2024-07-11 14:02:14.752326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.752545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.312 [2024-07-11 14:02:14.752557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.312 [2024-07-11 14:02:14.752565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.312 [2024-07-11 14:02:14.752700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.312 [2024-07-11 14:02:14.752818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.312 [2024-07-11 14:02:14.752828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.312 [2024-07-11 14:02:14.752834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.312 [2024-07-11 14:02:14.754484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.312 [2024-07-11 14:02:14.764099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.764527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.764725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.764736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.764744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.573 [2024-07-11 14:02:14.764830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.573 [2024-07-11 14:02:14.764932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.573 [2024-07-11 14:02:14.764941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.573 [2024-07-11 14:02:14.764948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.573 [2024-07-11 14:02:14.766767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.573 [2024-07-11 14:02:14.776110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.776557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.776686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.776698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.776705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.573 [2024-07-11 14:02:14.776822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.573 [2024-07-11 14:02:14.776955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.573 [2024-07-11 14:02:14.776965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.573 [2024-07-11 14:02:14.776971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.573 [2024-07-11 14:02:14.778803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.573 [2024-07-11 14:02:14.788047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.788458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.788670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.788682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.788690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.573 [2024-07-11 14:02:14.788837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.573 [2024-07-11 14:02:14.788972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.573 [2024-07-11 14:02:14.788982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.573 [2024-07-11 14:02:14.788988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.573 [2024-07-11 14:02:14.790732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.573 [2024-07-11 14:02:14.800219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.800700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.800900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.800932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.800954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.573 [2024-07-11 14:02:14.801235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.573 [2024-07-11 14:02:14.801340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.573 [2024-07-11 14:02:14.801350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.573 [2024-07-11 14:02:14.801356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.573 [2024-07-11 14:02:14.803138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.573 [2024-07-11 14:02:14.812213] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.812598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.812910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.812942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.812964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.573 [2024-07-11 14:02:14.813181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.573 [2024-07-11 14:02:14.813285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.573 [2024-07-11 14:02:14.813295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.573 [2024-07-11 14:02:14.813302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.573 [2024-07-11 14:02:14.815044] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.573 [2024-07-11 14:02:14.824100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.573 [2024-07-11 14:02:14.824514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.824678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.573 [2024-07-11 14:02:14.824709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.573 [2024-07-11 14:02:14.824731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.825094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.825210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.825223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.825230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.827009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.574 [2024-07-11 14:02:14.835852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.836206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.836440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.836472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.836494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.836874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.837349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.837360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.837366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.838959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.574 [2024-07-11 14:02:14.847663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.848092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.848421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.848454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.848476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.848604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.848719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.848729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.848735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.850453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.574 [2024-07-11 14:02:14.859552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.859936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.860245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.860279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.860302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.860518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.860627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.860636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.860648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.862231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.574 [2024-07-11 14:02:14.871592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.871974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.872265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.872298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.872321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.872575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.872684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.872693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.872699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.874404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.574 [2024-07-11 14:02:14.883391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.883816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.884067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.884099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.884121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.884509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.884625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.884635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.884641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.886934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.574 [2024-07-11 14:02:14.895732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.896085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.896281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.896293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.896300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.896417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.896520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.896528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.896535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.898417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.574 [2024-07-11 14:02:14.907821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.908241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.908487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.908499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.908506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.908634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.908719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.908728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.908735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.910575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.574 [2024-07-11 14:02:14.919753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.920145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.920436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.920470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.920492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.920922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.921321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.921349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.921370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.923195] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.574 [2024-07-11 14:02:14.931482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.931899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.932183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.932216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.574 [2024-07-11 14:02:14.932239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.574 [2024-07-11 14:02:14.932601] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.574 [2024-07-11 14:02:14.932701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.574 [2024-07-11 14:02:14.932711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.574 [2024-07-11 14:02:14.932717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.574 [2024-07-11 14:02:14.934377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.574 [2024-07-11 14:02:14.943298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.574 [2024-07-11 14:02:14.943721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.944007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.574 [2024-07-11 14:02:14.944040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:14.944062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:14.944560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:14.944691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:14.944700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:14.944706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:14.946467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.575 [2024-07-11 14:02:14.955264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:14.955655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.955860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.955872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:14.955879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:14.955973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:14.956081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:14.956089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:14.956095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:14.957924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.575 [2024-07-11 14:02:14.967190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:14.967542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.967723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.967735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:14.967742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:14.967871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:14.967971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:14.967979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:14.967987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:14.969712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.575 [2024-07-11 14:02:14.978979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:14.979360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.979605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.979617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:14.979625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:14.979733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:14.979813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:14.979821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:14.979828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:14.981464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.575 [2024-07-11 14:02:14.990893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:14.991208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.991524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:14.991556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:14.991579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:14.991967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:14.992105] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:14.992115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:14.992121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:14.993924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.575 [2024-07-11 14:02:15.002831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:15.003224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:15.003495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:15.003527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:15.003549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:15.003930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:15.004271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:15.004298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:15.004319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:15.006649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.575 [2024-07-11 14:02:15.014736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.575 [2024-07-11 14:02:15.015156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:15.015434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.575 [2024-07-11 14:02:15.015475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.575 [2024-07-11 14:02:15.015498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.575 [2024-07-11 14:02:15.015826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.575 [2024-07-11 14:02:15.015908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.575 [2024-07-11 14:02:15.015917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.575 [2024-07-11 14:02:15.015924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.575 [2024-07-11 14:02:15.018169] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.836 [2024-07-11 14:02:15.027387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.836 [2024-07-11 14:02:15.027718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.027954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.027966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.836 [2024-07-11 14:02:15.027973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.836 [2024-07-11 14:02:15.028088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.836 [2024-07-11 14:02:15.028209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.836 [2024-07-11 14:02:15.028220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.836 [2024-07-11 14:02:15.028227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.836 [2024-07-11 14:02:15.030049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.836 [2024-07-11 14:02:15.039170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.836 [2024-07-11 14:02:15.039628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.039875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.039907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.836 [2024-07-11 14:02:15.039929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.836 [2024-07-11 14:02:15.040273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.836 [2024-07-11 14:02:15.040608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.836 [2024-07-11 14:02:15.040633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.836 [2024-07-11 14:02:15.040653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.836 [2024-07-11 14:02:15.042527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.836 [2024-07-11 14:02:15.051223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.836 [2024-07-11 14:02:15.051620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.051912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.836 [2024-07-11 14:02:15.051944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.836 [2024-07-11 14:02:15.051974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.052468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.052802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.052828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.052848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.054698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.837 [2024-07-11 14:02:15.063015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.063325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.063565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.063576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.063584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.063720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.063800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.063808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.063815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.065565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.837 [2024-07-11 14:02:15.074989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.075371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.075607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.075640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.075662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.075786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.075881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.075889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.075895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.077602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.837 [2024-07-11 14:02:15.086880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.087281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.087531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.087563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.087585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.087973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.088128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.088138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.088144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.090029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.837 [2024-07-11 14:02:15.098653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.099034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.099302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.099336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.099358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.099713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.099808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.099817] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.099823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.101557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.837 [2024-07-11 14:02:15.110525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.110919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.111061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.111092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.111115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.111555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.111942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.111967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.111988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.113656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.837 [2024-07-11 14:02:15.122475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.122768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.123051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.123083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.123106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.123552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.123877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.123887] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.123893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.125461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.837 [2024-07-11 14:02:15.134345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.134759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.135045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.135076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.135100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.135230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.135330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.135340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.135346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.137134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.837 [2024-07-11 14:02:15.146248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.146665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.146932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.146943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.146950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.147034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.147169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.147178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.147184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.149074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.837 [2024-07-11 14:02:15.158191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.158567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.158703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.158716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.158723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.158837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.837 [2024-07-11 14:02:15.158936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.837 [2024-07-11 14:02:15.158947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.837 [2024-07-11 14:02:15.158955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.837 [2024-07-11 14:02:15.160676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.837 [2024-07-11 14:02:15.170125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.837 [2024-07-11 14:02:15.170530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.170796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.837 [2024-07-11 14:02:15.170829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.837 [2024-07-11 14:02:15.170851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.837 [2024-07-11 14:02:15.171278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.171349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.171358] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.171365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.173055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.838 [2024-07-11 14:02:15.181910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.182215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.182400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.182432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.182454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.182884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.183283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.183310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.183331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.185129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.838 [2024-07-11 14:02:15.193783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.194073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.194339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.194352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.194359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.194469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.194578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.194586] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.194595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.196301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.838 [2024-07-11 14:02:15.205458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.205770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.206035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.206067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.206090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.206350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.206467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.206476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.206482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.208250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.838 [2024-07-11 14:02:15.217351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.217624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.217895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.217907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.217913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.218049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.218157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.218173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.218179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.219823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.838 [2024-07-11 14:02:15.229019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.229433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.229668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.229700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.229723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.230102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.230446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.230473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.230495] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.232170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.838 [2024-07-11 14:02:15.240811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.241196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.241506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.241538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.241561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.241940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.242148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.242158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.242170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.243858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.838 [2024-07-11 14:02:15.252675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.253035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.253278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.253311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.253334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.253664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.254096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.254122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.254143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.255996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.838 [2024-07-11 14:02:15.264537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.264940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.265181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.265214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.265236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.265519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.265669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.265679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.265685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.267475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:12.838 [2024-07-11 14:02:15.276344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.276774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.277048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.277080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.277104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.277508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.838 [2024-07-11 14:02:15.277632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.838 [2024-07-11 14:02:15.277641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.838 [2024-07-11 14:02:15.277648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:12.838 [2024-07-11 14:02:15.279547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.838 [2024-07-11 14:02:15.288114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:12.838 [2024-07-11 14:02:15.288412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.288699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.838 [2024-07-11 14:02:15.288731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:12.838 [2024-07-11 14:02:15.288753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:12.838 [2024-07-11 14:02:15.289033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:12.839 [2024-07-11 14:02:15.289537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:12.839 [2024-07-11 14:02:15.289548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:12.839 [2024-07-11 14:02:15.289554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.291477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.100 [2024-07-11 14:02:15.300149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.300436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.300658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.300690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.300713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.300993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.301310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.301320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.301326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.303016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.100 [2024-07-11 14:02:15.311958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.312350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.312545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.312577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.312599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.312928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.313323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.313351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.313371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.315017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.100 [2024-07-11 14:02:15.323660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.324056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.324298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.324312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.324319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.324442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.324551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.324560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.324566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.326165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.100 [2024-07-11 14:02:15.335359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.335749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.335994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.336026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.336049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.336494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.336729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.336755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.336776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.338542] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.100 [2024-07-11 14:02:15.347225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.347612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.347922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.347960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.347983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.348477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.348739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.348749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.348754] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.350473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.100 [2024-07-11 14:02:15.359155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.359563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.359822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.359853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.359875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.360271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.360531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.360540] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.360546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.362127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.100 [2024-07-11 14:02:15.371328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.371678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.371918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.371929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.371961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.372306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.372587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.372596] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.372603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.374279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.100 [2024-07-11 14:02:15.383138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.383533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.383835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.383866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.383895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.100 [2024-07-11 14:02:15.384291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.100 [2024-07-11 14:02:15.384480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.100 [2024-07-11 14:02:15.384490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.100 [2024-07-11 14:02:15.384496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.100 [2024-07-11 14:02:15.386390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.100 [2024-07-11 14:02:15.395116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.100 [2024-07-11 14:02:15.395428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.395670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.100 [2024-07-11 14:02:15.395702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.100 [2024-07-11 14:02:15.395724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.396105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.396500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.396527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.396548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.398575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.101 [2024-07-11 14:02:15.406994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.101 [2024-07-11 14:02:15.407446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.407757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.407788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.101 [2024-07-11 14:02:15.407824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.407939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.408025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.408035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.408041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.409704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.101 [2024-07-11 14:02:15.419086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.101 [2024-07-11 14:02:15.419425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.419657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.419689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.101 [2024-07-11 14:02:15.419711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.420106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.420453] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.420480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.420501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.422265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.101 [2024-07-11 14:02:15.430852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.101 [2024-07-11 14:02:15.431268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.431462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.431473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.101 [2024-07-11 14:02:15.431480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.431574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.431696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.431704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.431710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.433389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.101 [2024-07-11 14:02:15.442662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.101 [2024-07-11 14:02:15.443047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.443283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.443317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.101 [2024-07-11 14:02:15.443339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.443610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.443761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.443770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.443776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.445484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.101 [2024-07-11 14:02:15.454430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.101 [2024-07-11 14:02:15.454768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.454990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.101 [2024-07-11 14:02:15.455022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.101 [2024-07-11 14:02:15.455044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.101 [2024-07-11 14:02:15.455341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.101 [2024-07-11 14:02:15.455781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.101 [2024-07-11 14:02:15.455807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.101 [2024-07-11 14:02:15.455828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.101 [2024-07-11 14:02:15.457685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.101 [2024-07-11 14:02:15.466288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.101 [2024-07-11 14:02:15.466726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.467065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.467097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.101 [2024-07-11 14:02:15.467119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.101 [2024-07-11 14:02:15.467566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.101 [2024-07-11 14:02:15.467701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.101 [2024-07-11 14:02:15.467711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.101 [2024-07-11 14:02:15.467717] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.101 [2024-07-11 14:02:15.469464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.101 [2024-07-11 14:02:15.478231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.101 [2024-07-11 14:02:15.478625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.478812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.478824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.101 [2024-07-11 14:02:15.478831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.101 [2024-07-11 14:02:15.478924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.101 [2024-07-11 14:02:15.479046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.101 [2024-07-11 14:02:15.479054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.101 [2024-07-11 14:02:15.479061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.101 [2024-07-11 14:02:15.480687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.101 [2024-07-11 14:02:15.489951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.101 [2024-07-11 14:02:15.490390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.490609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.490640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.101 [2024-07-11 14:02:15.490662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.101 [2024-07-11 14:02:15.490962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.101 [2024-07-11 14:02:15.491182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.101 [2024-07-11 14:02:15.491199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.101 [2024-07-11 14:02:15.491209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.101 [2024-07-11 14:02:15.493785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.101 [2024-07-11 14:02:15.502312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.101 [2024-07-11 14:02:15.502693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.503027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.503058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.101 [2024-07-11 14:02:15.503086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.101 [2024-07-11 14:02:15.503153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.101 [2024-07-11 14:02:15.503272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.101 [2024-07-11 14:02:15.503280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.101 [2024-07-11 14:02:15.503287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.101 [2024-07-11 14:02:15.505094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.101 [2024-07-11 14:02:15.514201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.101 [2024-07-11 14:02:15.514562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.514832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.101 [2024-07-11 14:02:15.514864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.101 [2024-07-11 14:02:15.514887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.102 [2024-07-11 14:02:15.515233] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.102 [2024-07-11 14:02:15.515567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.102 [2024-07-11 14:02:15.515593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.102 [2024-07-11 14:02:15.515615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.102 [2024-07-11 14:02:15.517561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.102 [2024-07-11 14:02:15.525910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.102 [2024-07-11 14:02:15.526325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.526557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.526588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.102 [2024-07-11 14:02:15.526610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.102 [2024-07-11 14:02:15.526940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.102 [2024-07-11 14:02:15.527147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.102 [2024-07-11 14:02:15.527157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.102 [2024-07-11 14:02:15.527173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.102 [2024-07-11 14:02:15.528753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.102 [2024-07-11 14:02:15.537783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.102 [2024-07-11 14:02:15.538148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.538453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.538484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.102 [2024-07-11 14:02:15.538507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.102 [2024-07-11 14:02:15.538985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.102 [2024-07-11 14:02:15.539080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.102 [2024-07-11 14:02:15.539089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.102 [2024-07-11 14:02:15.539095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.102 [2024-07-11 14:02:15.540612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.102 [2024-07-11 14:02:15.549697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.102 [2024-07-11 14:02:15.550033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.550277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.102 [2024-07-11 14:02:15.550306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.102 [2024-07-11 14:02:15.550329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.102 [2024-07-11 14:02:15.550660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.102 [2024-07-11 14:02:15.550927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.102 [2024-07-11 14:02:15.550937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.102 [2024-07-11 14:02:15.550943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.102 [2024-07-11 14:02:15.552548] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.561550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.561975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.562261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.562295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.562318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.562648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.562943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.562953] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.562959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.564640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.573462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.573862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.574093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.574125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.574149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.574498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.574831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.574856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.574878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.576516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.585397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.585777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.586023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.586054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.586076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.586321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.586445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.586455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.586460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.588166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.597182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.597583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.597911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.597943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.597965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.598311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.598694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.598720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.598741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.600685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.608925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.609337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.609654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.609686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.609708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.609986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.610313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.610323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.610329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.611977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.620905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.621337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.621615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.621647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.621669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.622099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.622326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.622336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.622342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.624779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.633328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.633725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.634040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.634073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.363 [2024-07-11 14:02:15.634095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.363 [2024-07-11 14:02:15.634394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.363 [2024-07-11 14:02:15.634492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.363 [2024-07-11 14:02:15.634502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.363 [2024-07-11 14:02:15.634509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.363 [2024-07-11 14:02:15.636177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.363 [2024-07-11 14:02:15.645082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.363 [2024-07-11 14:02:15.645493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.645817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.363 [2024-07-11 14:02:15.645848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.645870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.646058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.646139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.646148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.646155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.647795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.657064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.657482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.657765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.657797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.657820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.658213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.658421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.658430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.658437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.660078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.668828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.669237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.669489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.669521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.669544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.670023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.670185] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.670195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.670202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.671847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.680622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.680983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.681239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.681281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.681304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.681506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.681644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.681653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.681659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.683326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.692385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.692804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.693072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.693104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.693126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.693337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.693406] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.693414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.693420] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.694876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.704235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.704629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.704981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.705013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.705035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.705231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.705342] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.705351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.705357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.706968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.716153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.716605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.716925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.716958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.716988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.717433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.717598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.717607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.717613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.719225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.728054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.728428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.728711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.728743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.728766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.729215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.729432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.729442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.729448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.731137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.740150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.740594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.740910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.740943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.740965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.741260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.741709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.741718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.741724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.743416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.752042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.752461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.752712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.364 [2024-07-11 14:02:15.752744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.364 [2024-07-11 14:02:15.752767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.364 [2024-07-11 14:02:15.753104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.364 [2024-07-11 14:02:15.753412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.364 [2024-07-11 14:02:15.753425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.364 [2024-07-11 14:02:15.753436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.364 [2024-07-11 14:02:15.756234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.364 [2024-07-11 14:02:15.764277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.364 [2024-07-11 14:02:15.764671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.764918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.764950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.365 [2024-07-11 14:02:15.764972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.365 [2024-07-11 14:02:15.765317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.365 [2024-07-11 14:02:15.765589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.365 [2024-07-11 14:02:15.765598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.365 [2024-07-11 14:02:15.765605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.365 [2024-07-11 14:02:15.767144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.365 [2024-07-11 14:02:15.776103] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.365 [2024-07-11 14:02:15.776509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.776746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.776778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.365 [2024-07-11 14:02:15.776801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.365 [2024-07-11 14:02:15.777294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.365 [2024-07-11 14:02:15.777524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.365 [2024-07-11 14:02:15.777533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.365 [2024-07-11 14:02:15.777540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.365 [2024-07-11 14:02:15.779093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.365 [2024-07-11 14:02:15.787841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.365 [2024-07-11 14:02:15.788125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.788308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.788321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.365 [2024-07-11 14:02:15.788328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.365 [2024-07-11 14:02:15.788423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.365 [2024-07-11 14:02:15.788534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.365 [2024-07-11 14:02:15.788542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.365 [2024-07-11 14:02:15.788548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.365 [2024-07-11 14:02:15.790282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.365 [2024-07-11 14:02:15.799892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.365 [2024-07-11 14:02:15.800202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.800463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.800475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.365 [2024-07-11 14:02:15.800483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.365 [2024-07-11 14:02:15.800582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.365 [2024-07-11 14:02:15.800711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.365 [2024-07-11 14:02:15.800721] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.365 [2024-07-11 14:02:15.800728] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.365 [2024-07-11 14:02:15.802517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.365 [2024-07-11 14:02:15.812005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.365 [2024-07-11 14:02:15.812421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.812632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.365 [2024-07-11 14:02:15.812644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.365 [2024-07-11 14:02:15.812652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.365 [2024-07-11 14:02:15.812815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.365 [2024-07-11 14:02:15.812978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.365 [2024-07-11 14:02:15.812988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.365 [2024-07-11 14:02:15.812994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.365 [2024-07-11 14:02:15.814810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.625 [2024-07-11 14:02:15.823996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.625 [2024-07-11 14:02:15.824379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.625 [2024-07-11 14:02:15.824646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.824658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.824666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.824800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.824933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.824946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.824953] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.826728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.836052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.836474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.836702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.836713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.836721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.836793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.836880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.836889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.836896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.838821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.848169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.848520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.848803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.848815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.848823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.848941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.849060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.849070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.849076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.850940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.860188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.860565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.860758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.860770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.860778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.860880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.861012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.861023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.861032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.862762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.872164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.872507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.872779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.872791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.872799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.872916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.873050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.873060] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.873067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.874736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1783038 Killed "${NVMF_APP[@]}" "$@"
00:32:13.626 14:02:15 -- host/bdevperf.sh@36 -- # tgt_init
00:32:13.626 14:02:15 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:32:13.626 14:02:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:13.626 14:02:15 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:13.626 14:02:15 -- common/autotest_common.sh@10 -- # set +x
00:32:13.626 [2024-07-11 14:02:15.884297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.884605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 14:02:15 -- nvmf/common.sh@469 -- # nvmfpid=1784467
00:32:13.626 [2024-07-11 14:02:15.884801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-11 14:02:15.884813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 [2024-07-11 14:02:15.884821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.884908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 14:02:15 -- nvmf/common.sh@470 -- # waitforlisten 1784467
00:32:13.626 14:02:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:13.626 [2024-07-11 14:02:15.885010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.885019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.885026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 14:02:15 -- common/autotest_common.sh@819 -- # '[' -z 1784467 ']'
00:32:13.626 14:02:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:13.626 14:02:15 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:13.626 14:02:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 14:02:15 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:13.626 14:02:15 -- common/autotest_common.sh@10 -- # set +x
00:32:13.626 [2024-07-11 14:02:15.886818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.896375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.896651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.896841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.896852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.896859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.896976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.897124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.897133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.897139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.899217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.908290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.908698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.908941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.908953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.908960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.909077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.909201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.909211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.909218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.911064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.920342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.626 [2024-07-11 14:02:15.920716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.920943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.626 [2024-07-11 14:02:15.920955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.626 [2024-07-11 14:02:15.920962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.626 [2024-07-11 14:02:15.921076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.626 [2024-07-11 14:02:15.921181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.626 [2024-07-11 14:02:15.921190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.626 [2024-07-11 14:02:15.921197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.626 [2024-07-11 14:02:15.923168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.626 [2024-07-11 14:02:15.927274] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:13.626 [2024-07-11 14:02:15.927315] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:13.627 [2024-07-11 14:02:15.932157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:13.627 [2024-07-11 14:02:15.932494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.627 [2024-07-11 14:02:15.932760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.627 [2024-07-11 14:02:15.932772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420
00:32:13.627 [2024-07-11 14:02:15.932780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set
00:32:13.627 [2024-07-11 14:02:15.932880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor
00:32:13.627 [2024-07-11 14:02:15.932995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:13.627 [2024-07-11 14:02:15.933003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:13.627 [2024-07-11 14:02:15.933010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:13.627 [2024-07-11 14:02:15.934712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:13.627 [2024-07-11 14:02:15.943873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:15.944319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.944513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.944525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:15.944532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:15.944647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:15.944761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:15.944772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:15.944778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:15.946501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.627 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.627 [2024-07-11 14:02:15.955777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:15.956152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.956325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.956338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:15.956346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:15.956478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:15.956612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:15.956622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:15.956632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:15.958483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.627 [2024-07-11 14:02:15.967787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:15.968270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.968413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.968424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:15.968432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:15.968531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:15.968675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:15.968685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:15.968691] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:15.970454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.627 [2024-07-11 14:02:15.979769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:15.980077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.980300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.980314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:15.980321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:15.980469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:15.980616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:15.980626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:15.980633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:15.982354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.627 [2024-07-11 14:02:15.984653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:13.627 [2024-07-11 14:02:15.991661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:15.992046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.992250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:15.992262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:15.992271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:15.992400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:15.992500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:15.992510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:15.992517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:15.994323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.627 [2024-07-11 14:02:16.003717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:16.004112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.004306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.004319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:16.004328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:16.004443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:16.004515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:16.004524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:16.004531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:16.006415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.627 [2024-07-11 14:02:16.015791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:16.016240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.016429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.016442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:16.016451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:16.016597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:16.016743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:16.016752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:16.016760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:16.018544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.627 [2024-07-11 14:02:16.023938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:13.627 [2024-07-11 14:02:16.024039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.627 [2024-07-11 14:02:16.024047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.627 [2024-07-11 14:02:16.024054] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:13.627 [2024-07-11 14:02:16.024092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.627 [2024-07-11 14:02:16.024183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.627 [2024-07-11 14:02:16.024184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.627 [2024-07-11 14:02:16.027762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:16.028182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.028384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.028397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.627 [2024-07-11 14:02:16.028405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.627 [2024-07-11 14:02:16.028564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.627 [2024-07-11 14:02:16.028698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.627 [2024-07-11 14:02:16.028708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.627 [2024-07-11 14:02:16.028715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.627 [2024-07-11 14:02:16.030662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.627 [2024-07-11 14:02:16.039838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.627 [2024-07-11 14:02:16.040270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.627 [2024-07-11 14:02:16.040418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.040431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.628 [2024-07-11 14:02:16.040439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.628 [2024-07-11 14:02:16.040575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.628 [2024-07-11 14:02:16.040665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.628 [2024-07-11 14:02:16.040675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.628 [2024-07-11 14:02:16.040682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.628 [2024-07-11 14:02:16.042513] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.628 [2024-07-11 14:02:16.051760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.628 [2024-07-11 14:02:16.052152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.052306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.052318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.628 [2024-07-11 14:02:16.052327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.628 [2024-07-11 14:02:16.052447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.628 [2024-07-11 14:02:16.052565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.628 [2024-07-11 14:02:16.052575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.628 [2024-07-11 14:02:16.052582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.628 [2024-07-11 14:02:16.054436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.628 [2024-07-11 14:02:16.063931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.628 [2024-07-11 14:02:16.064354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.064555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.064567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.628 [2024-07-11 14:02:16.064576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.628 [2024-07-11 14:02:16.064716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.628 [2024-07-11 14:02:16.064834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.628 [2024-07-11 14:02:16.064845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.628 [2024-07-11 14:02:16.064852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.628 [2024-07-11 14:02:16.066843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.628 [2024-07-11 14:02:16.076054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.628 [2024-07-11 14:02:16.076390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.076540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.628 [2024-07-11 14:02:16.076553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.628 [2024-07-11 14:02:16.076562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.628 [2024-07-11 14:02:16.076653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.628 [2024-07-11 14:02:16.076773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.628 [2024-07-11 14:02:16.076782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.628 [2024-07-11 14:02:16.076789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.628 [2024-07-11 14:02:16.078731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.888 [2024-07-11 14:02:16.088113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.888 [2024-07-11 14:02:16.088364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.888 [2024-07-11 14:02:16.088585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.888 [2024-07-11 14:02:16.088597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.088606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.088756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.088889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.088899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.088905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.090576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.889 [2024-07-11 14:02:16.099916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.100314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.100603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.100616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.100624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.100727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.100849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.100860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.100866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.102819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.889 [2024-07-11 14:02:16.111911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.112288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.112508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.112521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.112528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.112658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.112758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.112768] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.112776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.114595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.889 [2024-07-11 14:02:16.123927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.124320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.124509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.124522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.124529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.124617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.124704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.124713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.124720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.126493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.889 [2024-07-11 14:02:16.135929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.136320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.136473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.136484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.136492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.136640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.136712] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.136722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.136734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.138508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.889 [2024-07-11 14:02:16.147715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.148170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.148380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.148392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.148401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.148472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.148575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.148584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.148591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.150182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.889 [2024-07-11 14:02:16.159803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.160187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.160329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.160342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.160349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.160483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.160585] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.160595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.160601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.162241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.889 [2024-07-11 14:02:16.171738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.172179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.172314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.172325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.172333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.172481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.172613] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.172623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.172634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.174329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.889 [2024-07-11 14:02:16.183690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.184074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.184316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.184328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.184336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.184453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.184540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.184549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.184556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.186416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.889 [2024-07-11 14:02:16.195763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.196170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.196365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.196378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.889 [2024-07-11 14:02:16.196385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.889 [2024-07-11 14:02:16.196503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.889 [2024-07-11 14:02:16.196651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.889 [2024-07-11 14:02:16.196661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.889 [2024-07-11 14:02:16.196667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.889 [2024-07-11 14:02:16.198470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.889 [2024-07-11 14:02:16.207838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.889 [2024-07-11 14:02:16.208185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.208428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.889 [2024-07-11 14:02:16.208440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.208447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.208565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.208652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.208660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.208667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.210317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.890 [2024-07-11 14:02:16.219744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.220150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.220302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.220315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.220323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.220456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.220589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.220599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.220606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.222472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.890 [2024-07-11 14:02:16.231740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.232145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.232353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.232366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.232374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.232506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.232638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.232648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.232655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.234592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.890 [2024-07-11 14:02:16.243644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.244036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.244223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.244235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.244243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.244391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.244524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.244534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.244540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.246310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.890 [2024-07-11 14:02:16.255652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.256065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.256306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.256318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.256326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.256459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.256546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.256556] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.256562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.258423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.890 [2024-07-11 14:02:16.267609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.267986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.268193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.268207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.268215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.268332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.268420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.268428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.268435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.270371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.890 [2024-07-11 14:02:16.279680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.280102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.280371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.280383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.280391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.280524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.280611] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.280619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.280626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.282396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.890 [2024-07-11 14:02:16.291670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.292057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.292303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.292316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.292324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.292442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.292560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.292568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.292576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.294288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.890 [2024-07-11 14:02:16.303731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.304155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.304441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.304454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.304461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.304564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.304651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.304660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.304667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.306470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.890 [2024-07-11 14:02:16.315905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.316325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.316515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.316527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.316534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.316651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.890 [2024-07-11 14:02:16.316799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.890 [2024-07-11 14:02:16.316810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.890 [2024-07-11 14:02:16.316817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.890 [2024-07-11 14:02:16.318588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:13.890 [2024-07-11 14:02:16.327879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.890 [2024-07-11 14:02:16.328271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.328462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.890 [2024-07-11 14:02:16.328474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.890 [2024-07-11 14:02:16.328485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.890 [2024-07-11 14:02:16.328572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.891 [2024-07-11 14:02:16.328689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.891 [2024-07-11 14:02:16.328698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.891 [2024-07-11 14:02:16.328705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.891 [2024-07-11 14:02:16.330384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:13.891 [2024-07-11 14:02:16.339969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:13.891 [2024-07-11 14:02:16.340328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.891 [2024-07-11 14:02:16.340526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.891 [2024-07-11 14:02:16.340538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:13.891 [2024-07-11 14:02:16.340546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:13.891 [2024-07-11 14:02:16.340632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:13.891 [2024-07-11 14:02:16.340735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:13.891 [2024-07-11 14:02:16.340744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:13.891 [2024-07-11 14:02:16.340750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:13.891 [2024-07-11 14:02:16.342674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.151 [2024-07-11 14:02:16.351834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.151 [2024-07-11 14:02:16.352216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.352401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.352413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.151 [2024-07-11 14:02:16.352420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.151 [2024-07-11 14:02:16.352523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.151 [2024-07-11 14:02:16.352610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.151 [2024-07-11 14:02:16.352618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.151 [2024-07-11 14:02:16.352625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.151 [2024-07-11 14:02:16.354397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.151 [2024-07-11 14:02:16.363765] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.151 [2024-07-11 14:02:16.364192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.364464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.364476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.151 [2024-07-11 14:02:16.364487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.151 [2024-07-11 14:02:16.364620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.151 [2024-07-11 14:02:16.364752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.151 [2024-07-11 14:02:16.364763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.151 [2024-07-11 14:02:16.364769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.151 [2024-07-11 14:02:16.366615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.151 [2024-07-11 14:02:16.375769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.151 [2024-07-11 14:02:16.376189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.376456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.151 [2024-07-11 14:02:16.376468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.151 [2024-07-11 14:02:16.376475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.151 [2024-07-11 14:02:16.376623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.151 [2024-07-11 14:02:16.376740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.151 [2024-07-11 14:02:16.376750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.151 [2024-07-11 14:02:16.376757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.151 [2024-07-11 14:02:16.378708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.152 [2024-07-11 14:02:16.387758 -> 14:02:16.714417] (28 further identical reset cycles elided: nvme_ctrlr_disconnect -> connect() failed, errno = 111 on tqpair=0x16bfd90 (addr=10.0.0.2, port=4420) -> 'Bad file descriptor' on flush -> controller reinitialization failed -> 'Resetting controller failed.')
00:32:14.417 [2024-07-11 14:02:16.723950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.724311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.724581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.724593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.724600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.724702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.724775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.724783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.724790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.726546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 [2024-07-11 14:02:16.735962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.736324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 14:02:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:14.417 [2024-07-11 14:02:16.736530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.736542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.736550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 14:02:16 -- common/autotest_common.sh@852 -- # return 0 00:32:14.417 [2024-07-11 14:02:16.736686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.736773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.736783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.736789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 14:02:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:14.417 14:02:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:14.417 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.417 [2024-07-11 14:02:16.738635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.417 [2024-07-11 14:02:16.748007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.748277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.748499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.748511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.748519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.748606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.748739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.748750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.748758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.750456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 [2024-07-11 14:02:16.759933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.760242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.760345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.760357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.760365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.760482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.760615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.760625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.760632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.762495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.417 [2024-07-11 14:02:16.771972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 14:02:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.417 [2024-07-11 14:02:16.772322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 14:02:16 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.417 [2024-07-11 14:02:16.772474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.772492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.772499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.772617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.772734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.772746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.772752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 14:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.417 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.417 [2024-07-11 14:02:16.774570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 [2024-07-11 14:02:16.775890] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.417 14:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.417 14:02:16 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:14.417 14:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.417 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.417 [2024-07-11 14:02:16.783996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.784303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.784590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.784602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.784609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.784772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.784860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.784869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.784876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
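The trap registered above is what guarantees teardown even if the run is interrupted: on SIGINT, SIGTERM, or normal EXIT it dumps the app's shared memory (process_shm) and runs nvmftestfini. The same pattern in miniature, with a hypothetical stand-in for the real cleanup function:

  cleanup() { echo 'tearing down'; }   # hypothetical stand-in for nvmftestfini
  trap 'cleanup' SIGINT SIGTERM EXIT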
00:32:14.417 [2024-07-11 14:02:16.786681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 [2024-07-11 14:02:16.795945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.796292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.796493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.796504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.796512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.796614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.796686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.796695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.796703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.798463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 [2024-07-11 14:02:16.807919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.808216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.808494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.808507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.808515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.808663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.808750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.808760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.808767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.810483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:14.417 Malloc0 00:32:14.417 14:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.417 14:02:16 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:14.417 14:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.417 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.417 [2024-07-11 14:02:16.819817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.820209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.820345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.820357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.417 [2024-07-11 14:02:16.820365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.417 [2024-07-11 14:02:16.820482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.417 [2024-07-11 14:02:16.820615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.417 [2024-07-11 14:02:16.820625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.417 [2024-07-11 14:02:16.820632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:14.417 [2024-07-11 14:02:16.822448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.417 14:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.417 14:02:16 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:14.417 14:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.417 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.417 [2024-07-11 14:02:16.831755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.417 [2024-07-11 14:02:16.832108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.832370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.417 [2024-07-11 14:02:16.832383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bfd90 with addr=10.0.0.2, port=4420 00:32:14.418 [2024-07-11 14:02:16.832390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bfd90 is same with the state(5) to be set 00:32:14.418 [2024-07-11 14:02:16.832538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bfd90 (9): Bad file descriptor 00:32:14.418 [2024-07-11 14:02:16.832701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:14.418 [2024-07-11 14:02:16.832715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:14.418 [2024-07-11 14:02:16.832722] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
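The rpc_cmd calls in this stretch map onto SPDK's rpc.py, so the same target can be brought up by hand. A sketch, assuming a running nvmf_tgt and the default RPC socket (the scripts/ path is the usual repo layout, not taken from this log):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The last call is the one that finally lets the initiator's reconnect loop succeed (the 'NVMe/TCP Target Listening' notice just below).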
00:32:14.418 [2024-07-11 14:02:16.834479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.418 14:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.418 14:02:16 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.418 14:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.418 14:02:16 -- common/autotest_common.sh@10 -- # set +x 00:32:14.418 [2024-07-11 14:02:16.841820] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.418 [2024-07-11 14:02:16.843621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:14.418 14:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.418 14:02:16 -- host/bdevperf.sh@38 -- # wait 1783493 00:32:14.678 [2024-07-11 14:02:16.902244] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:22.801 00:32:22.801 Latency(us) 00:32:22.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.801 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:22.801 Verification LBA range: start 0x0 length 0x4000 00:32:22.801 Nvme1n1 : 15.00 12394.76 48.42 19119.35 0.00 4049.94 562.75 16526.47 00:32:22.801 =================================================================================================================== 00:32:22.801 Total : 12394.76 48.42 19119.35 0.00 4049.94 562.75 16526.47 00:32:23.060 14:02:25 -- host/bdevperf.sh@39 -- # sync 00:32:23.060 14:02:25 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.060 14:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.060 14:02:25 -- common/autotest_common.sh@10 -- # set +x 00:32:23.060 14:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.060 14:02:25 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:23.060 14:02:25 -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:23.060 14:02:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:23.060 14:02:25 -- nvmf/common.sh@116 -- # sync 00:32:23.060 14:02:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:23.060 14:02:25 -- nvmf/common.sh@119 -- # set +e 00:32:23.060 14:02:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:23.060 14:02:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:23.060 rmmod nvme_tcp 00:32:23.060 rmmod nvme_fabrics 00:32:23.060 rmmod nvme_keyring 00:32:23.060 14:02:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:23.060 14:02:25 -- nvmf/common.sh@123 -- # set -e 00:32:23.060 14:02:25 -- nvmf/common.sh@124 -- # return 0 00:32:23.060 14:02:25 -- nvmf/common.sh@477 -- # '[' -n 1784467 ']' 00:32:23.060 14:02:25 -- nvmf/common.sh@478 -- # killprocess 1784467 00:32:23.060 14:02:25 -- common/autotest_common.sh@926 -- # '[' -z 1784467 ']' 00:32:23.060 14:02:25 -- common/autotest_common.sh@930 -- # kill -0 1784467 00:32:23.060 14:02:25 -- common/autotest_common.sh@931 -- # uname 00:32:23.060 14:02:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:23.060 14:02:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1784467 00:32:23.060 14:02:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:23.060 14:02:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:23.060 14:02:25 -- common/autotest_common.sh@944 -- # echo 'killing process with 
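Sanity-checking the bdevperf summary line: at the 4096-byte IO size, 12394.76 IOPS works out to the reported 48.42 MiB/s, and the large Fail/s column reflects the deliberate disconnect storm above. The arithmetic, assuming bc is installed:

  echo '12394.76 * 4096 / 1048576' | bc -l   # -> 48.4169..., i.e. the 48.42 MiB/s reported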
pid 1784467' 00:32:23.060 killing process with pid 1784467 00:32:23.060 14:02:25 -- common/autotest_common.sh@945 -- # kill 1784467 00:32:23.060 14:02:25 -- common/autotest_common.sh@950 -- # wait 1784467 00:32:23.320 14:02:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:23.320 14:02:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:23.320 14:02:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:23.320 14:02:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.320 14:02:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:23.320 14:02:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.320 14:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.320 14:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.856 14:02:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:25.856 00:32:25.856 real 0m25.396s 00:32:25.856 user 1m1.975s 00:32:25.856 sys 0m5.935s 00:32:25.856 14:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.856 14:02:27 -- common/autotest_common.sh@10 -- # set +x 00:32:25.856 ************************************ 00:32:25.856 END TEST nvmf_bdevperf 00:32:25.856 ************************************ 00:32:25.856 14:02:27 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:25.856 14:02:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:25.856 14:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:25.856 14:02:27 -- common/autotest_common.sh@10 -- # set +x 00:32:25.856 ************************************ 00:32:25.856 START TEST nvmf_target_disconnect 00:32:25.856 ************************************ 00:32:25.856 14:02:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:25.856 * Looking for test storage... 
00:32:25.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:25.856 14:02:27 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.856 14:02:27 -- nvmf/common.sh@7 -- # uname -s 00:32:25.856 14:02:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.856 14:02:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.856 14:02:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.856 14:02:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.856 14:02:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.856 14:02:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.856 14:02:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.856 14:02:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.856 14:02:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.856 14:02:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.856 14:02:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.856 14:02:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.856 14:02:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.856 14:02:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.856 14:02:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.856 14:02:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.856 14:02:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.856 14:02:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.856 14:02:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.856 14:02:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.856 14:02:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.857 14:02:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.857 14:02:27 -- paths/export.sh@5 -- # export PATH 00:32:25.857 14:02:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.857 14:02:27 -- nvmf/common.sh@46 -- # : 0 00:32:25.857 14:02:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:25.857 14:02:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:25.857 14:02:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:25.857 14:02:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.857 14:02:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.857 14:02:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:25.857 14:02:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:25.857 14:02:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:25.857 14:02:27 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:25.857 14:02:27 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:25.857 14:02:27 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:25.857 14:02:27 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:32:25.857 14:02:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:25.857 14:02:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.857 14:02:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:25.857 14:02:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:25.857 14:02:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:25.857 14:02:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.857 14:02:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:25.857 14:02:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.857 14:02:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:25.857 14:02:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:25.857 14:02:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:25.857 14:02:27 -- common/autotest_common.sh@10 -- # set +x 00:32:31.135 14:02:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:31.135 14:02:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:31.135 14:02:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:31.135 14:02:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:31.135 14:02:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:31.135 14:02:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:31.135 14:02:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:31.135 
14:02:32 -- nvmf/common.sh@294 -- # net_devs=() 00:32:31.135 14:02:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:31.135 14:02:32 -- nvmf/common.sh@295 -- # e810=() 00:32:31.135 14:02:32 -- nvmf/common.sh@295 -- # local -ga e810 00:32:31.135 14:02:32 -- nvmf/common.sh@296 -- # x722=() 00:32:31.135 14:02:32 -- nvmf/common.sh@296 -- # local -ga x722 00:32:31.135 14:02:32 -- nvmf/common.sh@297 -- # mlx=() 00:32:31.135 14:02:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:31.135 14:02:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.135 14:02:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:31.135 14:02:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:31.135 14:02:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:31.135 14:02:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:31.135 14:02:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:31.135 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:31.135 14:02:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:31.135 14:02:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:31.135 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:31.135 14:02:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:31.135 14:02:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:31.135 14:02:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:31.136 14:02:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:31.136 14:02:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:31.136 14:02:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.136 14:02:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:31.136 14:02:32 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.136 14:02:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:31.136 Found net devices under 0000:86:00.0: cvl_0_0 00:32:31.136 14:02:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.136 14:02:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:31.136 14:02:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.136 14:02:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:31.136 14:02:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.136 14:02:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:31.136 Found net devices under 0000:86:00.1: cvl_0_1 00:32:31.136 14:02:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.136 14:02:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:31.136 14:02:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:31.136 14:02:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:31.136 14:02:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:31.136 14:02:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:31.136 14:02:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.136 14:02:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.136 14:02:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.136 14:02:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:31.136 14:02:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.136 14:02:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.136 14:02:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:31.136 14:02:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.136 14:02:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.136 14:02:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:31.136 14:02:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:31.136 14:02:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.136 14:02:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.136 14:02:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.136 14:02:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.136 14:02:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:31.136 14:02:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.136 14:02:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.136 14:02:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.136 14:02:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:31.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:32:31.136 00:32:31.136 --- 10.0.0.2 ping statistics --- 00:32:31.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.136 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:31.136 14:02:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:32:31.136 00:32:31.136 --- 10.0.0.1 ping statistics --- 00:32:31.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.136 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:32:31.136 14:02:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.136 14:02:33 -- nvmf/common.sh@410 -- # return 0 00:32:31.136 14:02:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:31.136 14:02:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.136 14:02:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:31.136 14:02:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:31.136 14:02:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.136 14:02:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:31.136 14:02:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:31.136 14:02:33 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:31.136 14:02:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:31.136 14:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:31.136 14:02:33 -- common/autotest_common.sh@10 -- # set +x 00:32:31.136 ************************************ 00:32:31.136 START TEST nvmf_target_disconnect_tc1 00:32:31.136 ************************************ 00:32:31.136 14:02:33 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:32:31.136 14:02:33 -- host/target_disconnect.sh@32 -- # set +e 00:32:31.136 14:02:33 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:31.136 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.136 [2024-07-11 14:02:33.263998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.136 [2024-07-11 14:02:33.264388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:31.136 [2024-07-11 14:02:33.264403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c515c0 with addr=10.0.0.2, port=4420 00:32:31.136 [2024-07-11 14:02:33.264422] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:31.136 [2024-07-11 14:02:33.264452] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:31.136 [2024-07-11 14:02:33.264459] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:32:31.136 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:31.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:31.136 Initializing NVMe Controllers 00:32:31.136 14:02:33 -- host/target_disconnect.sh@33 -- # trap - ERR 00:32:31.136 14:02:33 -- host/target_disconnect.sh@33 -- # print_backtrace 00:32:31.136 14:02:33 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:32:31.136 14:02:33 -- common/autotest_common.sh@1132 -- # return 0 00:32:31.136 14:02:33 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:32:31.136 14:02:33 -- host/target_disconnect.sh@41 -- # set -e 00:32:31.136 00:32:31.136 real 0m0.092s 00:32:31.136 user 0m0.033s 00:32:31.136 sys 0m0.059s 00:32:31.136 14:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:31.136 14:02:33 -- common/autotest_common.sh@10 -- # set +x 00:32:31.136 ************************************ 00:32:31.136 
END TEST nvmf_target_disconnect_tc1 00:32:31.136 ************************************ 00:32:31.136 14:02:33 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:31.136 14:02:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:31.136 14:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:31.136 14:02:33 -- common/autotest_common.sh@10 -- # set +x 00:32:31.136 ************************************ 00:32:31.136 START TEST nvmf_target_disconnect_tc2 00:32:31.136 ************************************ 00:32:31.136 14:02:33 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:32:31.136 14:02:33 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:32:31.136 14:02:33 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:31.136 14:02:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:31.136 14:02:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:31.136 14:02:33 -- common/autotest_common.sh@10 -- # set +x 00:32:31.136 14:02:33 -- nvmf/common.sh@469 -- # nvmfpid=1789554 00:32:31.136 14:02:33 -- nvmf/common.sh@470 -- # waitforlisten 1789554 00:32:31.136 14:02:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:31.136 14:02:33 -- common/autotest_common.sh@819 -- # '[' -z 1789554 ']' 00:32:31.136 14:02:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.136 14:02:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:31.136 14:02:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.136 14:02:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:31.136 14:02:33 -- common/autotest_common.sh@10 -- # set +x 00:32:31.136 [2024-07-11 14:02:33.363143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:31.136 [2024-07-11 14:02:33.363194] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.136 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.136 [2024-07-11 14:02:33.433067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.136 [2024-07-11 14:02:33.472601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:31.136 [2024-07-11 14:02:33.472709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.136 [2024-07-11 14:02:33.472717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.136 [2024-07-11 14:02:33.472724] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:31.136 [2024-07-11 14:02:33.472840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:32:31.136 [2024-07-11 14:02:33.472872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:32:31.136 [2024-07-11 14:02:33.472976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:31.136 [2024-07-11 14:02:33.472978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:32:32.070 14:02:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:32.070 14:02:34 -- common/autotest_common.sh@852 -- # return 0 00:32:32.070 14:02:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:32.070 14:02:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 14:02:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.070 14:02:34 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 Malloc0 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 [2024-07-11 14:02:34.221335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 [2024-07-11 14:02:34.249568] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.070 14:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:32.070 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:32:32.070 14:02:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:32.070 14:02:34 -- host/target_disconnect.sh@50 -- # reconnectpid=1789658 00:32:32.070 14:02:34 -- host/target_disconnect.sh@52 -- # sleep 2 00:32:32.070 14:02:34 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.070 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.981 14:02:36 -- host/target_disconnect.sh@53 -- # kill -9 1789554 00:32:33.981 14:02:36 -- host/target_disconnect.sh@55 -- # sleep 2 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 [2024-07-11 14:02:36.276601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed 
with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Write completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 [2024-07-11 14:02:36.276801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.981 starting I/O failed 00:32:33.981 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error 
(sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 [2024-07-11 14:02:36.276987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 
00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Read completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 Write completed with error (sct=0, sc=8) 00:32:33.982 starting I/O failed 00:32:33.982 [2024-07-11 14:02:36.277182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:33.982 [2024-07-11 14:02:36.277486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.277728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.277762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.277988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.278240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.278272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.278498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.278721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.278751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.279031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.279337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.279369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 
00:32:33.982 [2024-07-11 14:02:36.279629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.280006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.280036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.280346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.280579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.280609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.280815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.281112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.281150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.281375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.281556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.281586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.281898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.282109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.282139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.282325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.282500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.282531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.282695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.282993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.283021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 
00:32:33.982 [2024-07-11 14:02:36.283338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.283571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.283602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.283828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.284112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.284125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.982 qpair failed and we were unable to recover it. 00:32:33.982 [2024-07-11 14:02:36.284487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.982 [2024-07-11 14:02:36.284658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.284698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.284922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.285107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.285118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.285228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.285444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.285474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.285725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.285982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.286018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.286308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.286587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.286618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 
00:32:33.983 [2024-07-11 14:02:36.286921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.287201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.287232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.287512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.287738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.287768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.288046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.288420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.288782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.288986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.289089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.289278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.289290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.289552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.289670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.289681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 
00:32:33.983 [2024-07-11 14:02:36.289805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.290197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.290594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.290749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.290876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.291132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.291143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.291409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.291613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.291643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.292024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.292248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.292279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 00:32:33.983 [2024-07-11 14:02:36.292472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.292710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.292740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it. 
00:32:33.983 [2024-07-11 14:02:36.292986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.293175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.983 [2024-07-11 14:02:36.293205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.983 qpair failed and we were unable to recover it.
00:32:33.989 (last message sequence repeated continuously from [2024-07-11 14:02:36.293440] through [2024-07-11 14:02:36.363573]: connect() failed, errno = 111 from posix.c:1032:posix_sock_create, followed by sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it.")
00:32:33.989 [2024-07-11 14:02:36.363868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.364205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.364235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.364461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.364763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.364800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.365018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.365343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.365374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.365678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.365998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.366028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.366327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.366504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.366534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.366918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.367129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.367169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.367401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.367552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.367582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 
00:32:33.989 [2024-07-11 14:02:36.367761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.368062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.368094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.368332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.368632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.368662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.368910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.369136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.369175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.369495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.369718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.369749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.370054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.370295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.370326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.370553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.370723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.370754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.371076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.371258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.371270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 
00:32:33.989 [2024-07-11 14:02:36.371412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.371599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.371631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.371946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.372098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.372129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.372348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.372621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.372652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.372896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.373118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.373148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.373401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.373630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.373660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.373818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.374333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 
00:32:33.989 [2024-07-11 14:02:36.374665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.374947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.375178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.375423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.375453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.375693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.375919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.375948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.376212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.376357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.376387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.376698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.377056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.377087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.377321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.377599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.989 [2024-07-11 14:02:36.377628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.989 qpair failed and we were unable to recover it. 00:32:33.989 [2024-07-11 14:02:36.377929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.378150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.378188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 
00:32:33.990 [2024-07-11 14:02:36.378411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.378572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.378602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.378877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.379139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.379194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.379357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.379632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.379661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.379960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.380248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.380279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.380507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.380769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.380810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.380999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.381126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.381156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.381378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.381609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.381639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 
00:32:33.990 [2024-07-11 14:02:36.381796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.382131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.382169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.382394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.382613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.382644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.382890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.383213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.383227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.990 qpair failed and we were unable to recover it. 00:32:33.990 [2024-07-11 14:02:36.383468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.990 [2024-07-11 14:02:36.383689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.383719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.384016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.384259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.384291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.384514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.384789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.384819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.385122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.385372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.385403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 
00:32:33.991 [2024-07-11 14:02:36.385699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.385914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.385925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.386214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.386536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.386566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.386780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.387079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.387109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.387384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.387588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.387599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.387793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.388178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.388673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.388866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 
00:32:33.991 [2024-07-11 14:02:36.389165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.389376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.389387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.389572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.389776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.389806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.390111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.390434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.390458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.390649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.390846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.390876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.391157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.391388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.391419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.391723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.391938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.391969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.392196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.392499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.392529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 
00:32:33.991 [2024-07-11 14:02:36.392809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.393123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.393154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.393420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.393703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.393733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.393962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.394260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.394291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.394530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.394767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.394798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.395090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.395243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.395274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.395501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.395803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.395834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.396065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.396313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.396346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 
00:32:33.991 [2024-07-11 14:02:36.396628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.396934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.396965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.397220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.397498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.397528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.991 qpair failed and we were unable to recover it. 00:32:33.991 [2024-07-11 14:02:36.397856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.991 [2024-07-11 14:02:36.398131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.398185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.398515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.398801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.398832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.399136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.399370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.399401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.399711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.400285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 
00:32:33.992 [2024-07-11 14:02:36.400725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.400988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.401146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.401416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.401447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.401756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.401973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.402003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.402296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.402592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.402622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.402794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.403289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.403681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.403938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 
00:32:33.992 [2024-07-11 14:02:36.404076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.404261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.404273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.404488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.404734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.404764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.404936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.405452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.405800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.405953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.406244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.406457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.406496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.406735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.406889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.406920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 
00:32:33.992 [2024-07-11 14:02:36.407243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.407472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.407504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.407736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.407951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.407967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.408091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.408213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.408229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.408425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.408689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.408719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.408877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.409121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.409152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.409321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.409472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.409502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.409805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 
00:32:33.992 [2024-07-11 14:02:36.410253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.410664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.410929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.411137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.411302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.411319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.411509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.411784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.411814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.992 qpair failed and we were unable to recover it. 00:32:33.992 [2024-07-11 14:02:36.412090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.992 [2024-07-11 14:02:36.412378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.993 [2024-07-11 14:02:36.412410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.993 qpair failed and we were unable to recover it. 00:32:33.993 [2024-07-11 14:02:36.412580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.993 [2024-07-11 14:02:36.412814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.993 [2024-07-11 14:02:36.412845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.993 qpair failed and we were unable to recover it. 00:32:33.993 [2024-07-11 14:02:36.413139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.993 [2024-07-11 14:02:36.413356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:33.993 [2024-07-11 14:02:36.413372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:33.993 qpair failed and we were unable to recover it. 
00:32:33.993 [2024-07-11 14:02:36.413660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.993 [2024-07-11 14:02:36.413949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:33.993 [2024-07-11 14:02:36.413979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:33.993 qpair failed and we were unable to recover it.
[... the same four-record sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1413710 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for each retry from 14:02:36.414 through 14:02:36.482 ...]
00:32:34.270 [2024-07-11 14:02:36.482175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.270 [2024-07-11 14:02:36.482441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.270 [2024-07-11 14:02:36.482471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:34.270 qpair failed and we were unable to recover it.
00:32:34.270 [2024-07-11 14:02:36.482696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.482898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.482928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.270 qpair failed and we were unable to recover it. 00:32:34.270 [2024-07-11 14:02:36.483100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.483279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.483309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.270 qpair failed and we were unable to recover it. 00:32:34.270 [2024-07-11 14:02:36.483587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.483735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.483764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.270 qpair failed and we were unable to recover it. 00:32:34.270 [2024-07-11 14:02:36.483992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.484145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.484165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.270 qpair failed and we were unable to recover it. 00:32:34.270 [2024-07-11 14:02:36.484433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.484654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.270 [2024-07-11 14:02:36.484684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.270 qpair failed and we were unable to recover it. 00:32:34.270 [2024-07-11 14:02:36.484907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.485124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.485153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.485428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.485539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.485569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 
00:32:34.271 [2024-07-11 14:02:36.485808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.485970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.486000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.486197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.486388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.486418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.486649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.486788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.486818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.487032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.487328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.487359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.487582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.487730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.487759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.487999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.488147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.488185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.488422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.488550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.488585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 
00:32:34.271 [2024-07-11 14:02:36.488804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.489078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.489109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.489342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.489563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.489593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.489802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.490201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.490540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.490855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.491131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.491369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.491384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.491483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.491698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.491728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 
00:32:34.271 [2024-07-11 14:02:36.491954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.492116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.492145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.492368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.492512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.492547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.271 qpair failed and we were unable to recover it. 00:32:34.271 [2024-07-11 14:02:36.492727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.271 [2024-07-11 14:02:36.492924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.492954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.493245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.493484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.493514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.493693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.493856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.493886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.494187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.494343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.494372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.494579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.494901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.494930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 
00:32:34.272 [2024-07-11 14:02:36.495175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.495472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.495501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.495778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.495982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.496012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.496181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.496419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.496448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.496688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.496962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.496992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.497170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.497327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.497357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.497634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.497869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.497899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.498203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.498413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.498443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 
00:32:34.272 [2024-07-11 14:02:36.498575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.498872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.498902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.499059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.499185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.499201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.499422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.499549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.499564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.499811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.500016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.500045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.500353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.500613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.500642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.500805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.501327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 
00:32:34.272 [2024-07-11 14:02:36.501665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.501843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.502061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.502259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.502274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.502548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.502714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.502744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.502974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.503171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.503201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.503547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.503726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.503757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.504083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.504255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.504286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 00:32:34.272 [2024-07-11 14:02:36.504494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.504687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.272 [2024-07-11 14:02:36.504716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.272 qpair failed and we were unable to recover it. 
00:32:34.272 [2024-07-11 14:02:36.504965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.505236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.505266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.505528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.505753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.505783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.505940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.506244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.506275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.506445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.506675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.506690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.506971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.507122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.507152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.507448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.507669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.507685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.507943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.508167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.508197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 
00:32:34.273 [2024-07-11 14:02:36.508481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.508634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.508664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.508966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.509281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.509312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.509471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.509765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.509794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.510025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.510150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.510192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.510358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.510591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.510606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.510889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.511328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 
00:32:34.273 [2024-07-11 14:02:36.511750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.511997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.512207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.512436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.512471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.512689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.512904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.512934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.513212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.513434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.513463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.513767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.513941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.513971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.514207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.514421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.514450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.514752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.514989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.515019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 
00:32:34.273 [2024-07-11 14:02:36.515296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.515406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.515436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.273 qpair failed and we were unable to recover it. 00:32:34.273 [2024-07-11 14:02:36.515658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.273 [2024-07-11 14:02:36.515771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.515801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.516020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.516179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.516211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.516364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.516610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.516640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.516796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.517016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.517046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.517267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.517514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.517543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.517689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 
00:32:34.274 [2024-07-11 14:02:36.518257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.518598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.518758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.518953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.519154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.519254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.519478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.519750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.519779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.520090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.520459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.520741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.520977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 
00:32:34.274 [2024-07-11 14:02:36.521126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.521418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.521433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.521598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.521778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.521808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.522018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.522178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.522208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.522517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.522671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.522701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.522926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.523132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.523168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.523392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.523542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.523572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.523869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 
00:32:34.274 [2024-07-11 14:02:36.524350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.524680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.524881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.525053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.525208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.274 [2024-07-11 14:02:36.525239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.274 qpair failed and we were unable to recover it. 00:32:34.274 [2024-07-11 14:02:36.525469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.525748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.525763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.275 qpair failed and we were unable to recover it. 00:32:34.275 [2024-07-11 14:02:36.525934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.526144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.526180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.275 qpair failed and we were unable to recover it. 00:32:34.275 [2024-07-11 14:02:36.526344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.526508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.526523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.275 qpair failed and we were unable to recover it. 00:32:34.275 [2024-07-11 14:02:36.526767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.527030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.275 [2024-07-11 14:02:36.527045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.275 qpair failed and we were unable to recover it. 
00:32:34.275 [2024-07-11 14:02:36.527183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:34.275 [2024-07-11 14:02:36.527369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:34.275 [2024-07-11 14:02:36.527384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 
00:32:34.275 qpair failed and we were unable to recover it. 
[The same four-line cycle repeats, with only the timestamps advancing, for roughly 150 further connection attempts between 2024-07-11 14:02:36.527 and 14:02:36.583 (log clock 00:32:34.275 through 00:32:34.281): two posix.c:1032:posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."]
00:32:34.281 [2024-07-11 14:02:36.583410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.583670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.583686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.583813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.584215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.584575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.584870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.584991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.585180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.585367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.585382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.585499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.585620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.585635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 
00:32:34.281 [2024-07-11 14:02:36.585930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.586185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.586520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.586679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.586889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.587050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.587079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.587294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.587574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.587603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.587825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.587977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.588007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.588175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.588395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.588425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 
00:32:34.281 [2024-07-11 14:02:36.588635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.588796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.588826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.589002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.589219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.589251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.589465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.589754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.589784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.590009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.590240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.590270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.590424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.590620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.590650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.590862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.591011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.591042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.591317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.591536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.591566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 
00:32:34.281 [2024-07-11 14:02:36.591771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.592199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.281 [2024-07-11 14:02:36.592589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.281 [2024-07-11 14:02:36.592776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.281 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.592957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.593309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.593738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.593915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.594080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.594352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.594383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 
00:32:34.282 [2024-07-11 14:02:36.594611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.594812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.594827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.595025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.595191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.595222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.595380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.595586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.595616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.595838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.596101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.596133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.596386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.596672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.596705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.596946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.597097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.597127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.597290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.597563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.597592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 
00:32:34.282 [2024-07-11 14:02:36.597869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.598219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.598702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.598945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.599116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.599346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.599377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.599528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.599755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.599785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.600068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.600225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.600255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.600417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.600689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.600719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 
00:32:34.282 [2024-07-11 14:02:36.600967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.601240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.601271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.601576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.601778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.601808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.601966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.602179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.602210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.602457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.602760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.602790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.603001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.603157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.603194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.282 qpair failed and we were unable to recover it. 00:32:34.282 [2024-07-11 14:02:36.603363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.603486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.282 [2024-07-11 14:02:36.603501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.603774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.603881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.603911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 
00:32:34.283 [2024-07-11 14:02:36.604030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.604336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.604367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.604526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.604710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.604740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.604861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.605211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.605717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.605891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.606048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.606255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.606285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.606534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.606779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.606809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 
00:32:34.283 [2024-07-11 14:02:36.607038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.607244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.607275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.607501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.607820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.607849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.608007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.608244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.608276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.608548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.608728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.608758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.608987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.609147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.609197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.609352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.609567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.609597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.609818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.609978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.610007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 
00:32:34.283 [2024-07-11 14:02:36.610254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.610478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.610508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.610809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.610960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.610990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.611113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.611414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.611446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.611604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.611847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.611877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.612095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.612316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.612347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.612493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.612779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.612810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.613084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.613238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.613269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 
00:32:34.283 [2024-07-11 14:02:36.613430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.613652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.613682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.613826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.614033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.614062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.614222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.614382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.614413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.283 qpair failed and we were unable to recover it. 00:32:34.283 [2024-07-11 14:02:36.614556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.283 [2024-07-11 14:02:36.614691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.614721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.614953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.615193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.615223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.615381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.615584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.615613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.615840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 
00:32:34.284 [2024-07-11 14:02:36.616346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.616774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.616975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.617198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.617470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.617501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.617669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.617798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.617813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.617978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.618178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.618209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.618437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.618583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.618612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.618820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 
00:32:34.284 [2024-07-11 14:02:36.619273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.619736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.619973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.620194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.620419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.620448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.620694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.620905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.620941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.621246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.621414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.621444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.621613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.621769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.621798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.622066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.622339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.622369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 
00:32:34.284 [2024-07-11 14:02:36.622667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.622797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.622826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.622978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.623434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.623797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.623932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.624119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.624374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.624389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.624578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.624706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.624736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 00:32:34.284 [2024-07-11 14:02:36.624951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.625101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.284 [2024-07-11 14:02:36.625132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.284 qpair failed and we were unable to recover it. 
00:32:34.284 [2024-07-11 14:02:36.625321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.284 [2024-07-11 14:02:36.625470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.284 [2024-07-11 14:02:36.625500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:34.284 qpair failed and we were unable to recover it.
[... the four-line error pattern above repeats continuously from 14:02:36.625321 through 14:02:36.692381 (elapsed-time stamps 00:32:34.284-00:32:34.290), with only the timestamps varying: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1413710 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:32:34.290 [2024-07-11 14:02:36.692682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.692885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.692915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.693175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.693505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.693535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.693694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.693869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.693898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.694154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.694376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.694405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.694630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.694912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.694942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.695153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.695381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.695412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.695624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.695781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.695811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 
00:32:34.290 [2024-07-11 14:02:36.696082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.696193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.696225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.696474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.696676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.696707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.696911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.697100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.697130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.697437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.697719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.697749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.697922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.698136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.698150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.698440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.698599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.698628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.698848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 
00:32:34.290 [2024-07-11 14:02:36.699218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.699530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.699783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.700007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.700155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.700197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.700509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.700657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.700687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.700939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.701129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.701144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.290 [2024-07-11 14:02:36.701277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.701470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.290 [2024-07-11 14:02:36.701500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.290 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.701719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.701937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.701979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 
00:32:34.291 [2024-07-11 14:02:36.702223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.702402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.702416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.702621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.702842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.702873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.703093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.703284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.703314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.703526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.703678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.703708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.703932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.704151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.704169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.704369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.704572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.704603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.704811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 
00:32:34.291 [2024-07-11 14:02:36.705320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.705720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.705933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.706172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.706460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.706490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.706628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.706870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.706885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.707064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.707322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.707353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.707531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.707682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.707711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.708006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.708175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.708206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 
00:32:34.291 [2024-07-11 14:02:36.708483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.708698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.291 [2024-07-11 14:02:36.708728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.291 qpair failed and we were unable to recover it. 00:32:34.291 [2024-07-11 14:02:36.708931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.709197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.709230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.709456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.709685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.709714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.709930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.710314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.710647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.710979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.711149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.711344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.711375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 
00:32:34.562 [2024-07-11 14:02:36.711589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.711738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.711768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.711938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.712358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.712750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.712877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.713010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.713257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.713288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.713400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.713638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.713668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.713966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.714118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.714148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 
00:32:34.562 [2024-07-11 14:02:36.714407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.714645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.714679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.562 qpair failed and we were unable to recover it. 00:32:34.562 [2024-07-11 14:02:36.714890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.562 [2024-07-11 14:02:36.715107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.715137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.715337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.715585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.715615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.715893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.716394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.716820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.716977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.717254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.717470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.717500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.717727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.717986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.718017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.718269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.718514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.718544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.718819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.719101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.719131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.719351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.719503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.719533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.719740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.720199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.720626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.720875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.721087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.721273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.721304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.721557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.721719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.721749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.722030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.722269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.722300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.722466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.722689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.722719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.722889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.723182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.723214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.723448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.723615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.723630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.723824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.723984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.724014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.724248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.724452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.724481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.724700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.724852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.724867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.725141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.725334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.725366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.725531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.725682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.725698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.725901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.726194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.726226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.726439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.726734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.726748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.726941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.727184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.727212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.727374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.727542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.727572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.727745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.728183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.728688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.728934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.729212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.729326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.729341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.729458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.729678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.729707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.729850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.730170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.730613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.730847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.731099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.731222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.731237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.731504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.731696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.731725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.732001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.732217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.732249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.732549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.732776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.732806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.732919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.733194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.733226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.733393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.733567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.733597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.733806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.734021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.734050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.734351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.734571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.734600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.734844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.735143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.735180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.735400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.735620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.735651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.735885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.736090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.736120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.736448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.736670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.736700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 
00:32:34.563 [2024-07-11 14:02:36.736908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.737206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.737236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.737515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.737685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.737715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.737961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.738258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.738300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.738446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.738716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.738745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.563 qpair failed and we were unable to recover it. 00:32:34.563 [2024-07-11 14:02:36.738921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.563 [2024-07-11 14:02:36.739084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.739114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.739450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.739664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.739693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.739915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.740215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.740246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.740455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.740730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.740760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.740937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.741084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.741114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.741358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.741562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.741591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.741799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.742185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.742640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.742981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.743140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.743478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.743508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.743717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.743929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.743959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.744132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.744284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.744314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.744488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.744758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.744788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.744952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.745180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.745195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.745464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.745596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.745611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.745898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.746179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.746210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.746378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.746599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.746629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.746856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.747072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.747102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.747355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.747649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.747678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.747892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.748154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.748196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.748404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.748636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.748666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.748828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.748988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.749003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.749296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.749504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.749534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.749702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.749861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.749891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.750119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.750246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.750276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.750437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.750708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.750737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.750967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.751185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.751215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.751540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.751815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.751845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.752094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.752331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.752362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.752589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.752834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.752864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.753137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.753391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.753422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.753722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.753991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.754022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.754320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.754441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.754471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.754775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.754992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.755008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.755203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.755401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.755430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.755737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.755895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.755925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.756151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.756310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.756339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.756566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.756721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.756736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.756844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.757290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.757714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.757960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.758239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.758409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.758438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.758617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.758773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.758803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.759022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.759136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.759175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.759412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.759629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.759659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.759934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.760141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.760181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.760391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.760560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.760590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.760836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.761087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.761117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.761368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.761543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.761572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.761829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.762118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.762154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.762415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.762642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.762672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.762848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.763098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.763127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 
00:32:34.564 [2024-07-11 14:02:36.763294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.763436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.763466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.564 qpair failed and we were unable to recover it. 00:32:34.564 [2024-07-11 14:02:36.763769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.564 [2024-07-11 14:02:36.764050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.764079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.764297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.764508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.764537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.764765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.764969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.765000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.765217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.765400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.765430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.765659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.765953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.765993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.766115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.766223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.766238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.766376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.766566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.766601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.766832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.767295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.767811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.767984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.768131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.768347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.768378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.768543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.768780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.768810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.768936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.769359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.769750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.769836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.770081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.770235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.770267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.770493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.770709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.770739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.770902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.771174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.771204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.771419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.771630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.771660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.771877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.772261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.772730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.772912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.773137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.773314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.773345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.773578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.773727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.773757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.773960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.774232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.774263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.774490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.774626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.774655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.774812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.775347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.775769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.775949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.776177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.776348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.776378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.776600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.776747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.776777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.776995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.777209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.777240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.777578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.777865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.777895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.778152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.778421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.778436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.778621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.778758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.778774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.778950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.779084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.779098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.779298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.779561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.779575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.779819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.780223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.780682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.780835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.781081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 
00:32:34.565 [2024-07-11 14:02:36.781486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.781790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.781933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.565 [2024-07-11 14:02:36.782128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.782303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.565 [2024-07-11 14:02:36.782318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.565 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.782511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.782727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.782742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.782915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.783352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.783664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.783933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 
00:32:34.566 [2024-07-11 14:02:36.784117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.784301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.784316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.784444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.784686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.784701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.784823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.785322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.785654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.785851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.786045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.786382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 
00:32:34.566 [2024-07-11 14:02:36.786698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.786964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.787091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.787278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.787293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.787538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.787725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.787742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.787935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.788264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.788524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.788781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.788923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 
00:32:34.566 [2024-07-11 14:02:36.789212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.789413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.789428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.789669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.789846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.789862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.789998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.790252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.790687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.790824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.790949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.791129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.791145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 00:32:34.566 [2024-07-11 14:02:36.791344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.791455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.566 [2024-07-11 14:02:36.791468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.566 qpair failed and we were unable to recover it. 
00:32:34.566 [2024-07-11 14:02:36.791592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.566 [2024-07-11 14:02:36.791807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.566 [2024-07-11 14:02:36.791821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:34.566 qpair failed and we were unable to recover it.
[... the same sequence — two "posix_sock_create: *ERROR*: connect() failed, errno = 111" records, "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats for every retry from 14:02:36.791996 through 14:02:36.817271 ...]
00:32:34.567 [2024-07-11 14:02:36.817363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1421170 is same with the state(5) to be set
00:32:34.567 [2024-07-11 14:02:36.817584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.567 [2024-07-11 14:02:36.817718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.568 [2024-07-11 14:02:36.817737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:34.568 qpair failed and we were unable to recover it.
[... the same retry sequence repeats for tqpair=0x7f4d00000b90 from 14:02:36.817860 through 14:02:36.839451 ...]
00:32:34.569 [2024-07-11 14:02:36.839625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.569 [2024-07-11 14:02:36.839847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.569 [2024-07-11 14:02:36.839877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:34.569 qpair failed and we were unable to recover it.
[... the retry sequence then resumes for tqpair=0x1413710 from 14:02:36.840179 through 14:02:36.849989 ...]
00:32:34.569 [2024-07-11 14:02:36.850171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.850416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.850446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.850653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.850867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.850897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.851209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.851365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.851395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.851631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.851901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.851931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.852152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.852298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.852329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.852535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.852810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.852840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.853083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.853294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.853325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 
00:32:34.569 [2024-07-11 14:02:36.853541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.853757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.853787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.853940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.854169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.854200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.854415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.854615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.854645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.854859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.855273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.855602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.855794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 00:32:34.569 [2024-07-11 14:02:36.856015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.856231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.856262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.569 qpair failed and we were unable to recover it. 
00:32:34.569 [2024-07-11 14:02:36.856579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.569 [2024-07-11 14:02:36.856796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.856826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.857105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.857331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.857347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.857569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.857861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.857891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.858169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.858342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.858357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.858544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.858763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.858793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.859036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.859253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.859284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.859434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.859568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.859598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.859805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.860077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.860107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.860406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.860647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.860662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.860951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.861120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.861150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.861381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.861541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.861571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.861779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.862024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.862054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.862331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.862487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.862516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.862801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.863226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.863633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.863870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.864006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.864260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.864755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.864946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.865240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.865418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.865448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.865659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.865807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.865837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.866064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.866334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.866365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.866572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.866812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.866826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.867030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.867281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.867312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.867454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.867607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.867636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.867927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.868105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.868136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.868359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.868574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.868604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.868901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.869327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.869657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.869832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.870000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.870292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.870322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.870482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.870715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.870745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.870979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.871184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.871200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.871323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.871512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.871547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.871781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.872327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.872665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.872969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.873135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.873345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.873376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.873540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.873680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.873710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.873871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.874148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.874188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.874414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.874602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.874616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.874795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.874986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.875016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 
00:32:34.570 [2024-07-11 14:02:36.875235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.875506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.875535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.875691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.875849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.875885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.570 qpair failed and we were unable to recover it. 00:32:34.570 [2024-07-11 14:02:36.876157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.570 [2024-07-11 14:02:36.876401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.876415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.876688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.876957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.876971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.877090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.877282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.877297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.877404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.877586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.877616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.877835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.878304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.878821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.878998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.879258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.879521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.879535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.879661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.879857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.879872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.879994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.880292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.880307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.880485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.880630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.880659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.880898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.881367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.881679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.881902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.882077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.882279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.882309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.882522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.882722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.882751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.882969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.883180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.883212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.883458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.883676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.883705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.883915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.884120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.884150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.884324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.884585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.884615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.884865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.885143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.885197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.885353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.885562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.885592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.885799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.886288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.886620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.886757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.886961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.887246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.887277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.887485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.887708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.887737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.888017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.888265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.888296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.888572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.888795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.888825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.889037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.889190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.889214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.889412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.889626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.889657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.889874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.890075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.890105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.890395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.890535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.890564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.890841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.891299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.891748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.891997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.892273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.892394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.892423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.892617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.892891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.892921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.893192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.893365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.893395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 00:32:34.571 [2024-07-11 14:02:36.893696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.893832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.571 [2024-07-11 14:02:36.893861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.571 qpair failed and we were unable to recover it. 
00:32:34.571 [2024-07-11 14:02:36.894110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.571 [2024-07-11 14:02:36.894288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:34.571 [2024-07-11 14:02:36.894304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:34.571 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats continuously (elapsed time 00:32:34.571 through 00:32:34.574, wall clock 14:02:36.894110 through 14:02:36.962534), with only the timestamps varying: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:32:34.575 [2024-07-11 14:02:36.962738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.962939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.962966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.963193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.963482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.963509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.963647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.963868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.963895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.964134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.964348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.964376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.964554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.964816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.964828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.965002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.982491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.982565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.982786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.983056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.983069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 
00:32:34.575 [2024-07-11 14:02:36.983360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.983611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.983623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.983882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.984183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.984212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.984468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.984695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.984723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.984978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.985194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.985222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.985470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.985686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.985698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.985880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.998154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.998238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.998491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.998747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.998776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 
00:32:34.575 [2024-07-11 14:02:36.998947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.999120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.999149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.999296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.999578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:36.999590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:36.999780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.000277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.000527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.000780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.000959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.001374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 
00:32:34.575 [2024-07-11 14:02:37.001627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.001873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.002126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.002324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.002336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.002517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.002751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.002764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.002875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.003335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.003649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.003841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.003956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 
00:32:34.575 [2024-07-11 14:02:37.004321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.004648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.004800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.004936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.005270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.005689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.005813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.006011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.006143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.575 [2024-07-11 14:02:37.006156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.575 qpair failed and we were unable to recover it. 00:32:34.575 [2024-07-11 14:02:37.006353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.006554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.006569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 
00:32:34.837 [2024-07-11 14:02:37.006689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.006872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.006885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.007103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.007234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.007246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.007342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.007717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.007732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.007868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.008322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.008692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.008816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.837 [2024-07-11 14:02:37.009010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.009122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.009135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 
00:32:34.837 [2024-07-11 14:02:37.009269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.009531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.837 [2024-07-11 14:02:37.009544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.837 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.009678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.009851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.009864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.010111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.010236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.010250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.010522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.010712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.010725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.010904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.011246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.011622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.011812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 
00:32:34.838 [2024-07-11 14:02:37.011941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.012357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.012781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.012932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.013123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.013348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.013362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.013585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.013777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.013790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.013982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.014294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 
00:32:34.838 [2024-07-11 14:02:37.014571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.014863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.014993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.015123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.015455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.015812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.015940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.016115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.016298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.016312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.016487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.016664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.016676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 
00:32:34.838 [2024-07-11 14:02:37.016876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.017170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.017561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.017740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.017863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.018189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.018615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.018802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.018992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.019220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.019233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 
00:32:34.838 [2024-07-11 14:02:37.019453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.019676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.019689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.019876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.020114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.020130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.020325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.020566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.020579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.838 qpair failed and we were unable to recover it. 00:32:34.838 [2024-07-11 14:02:37.020857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.838 [2024-07-11 14:02:37.021010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.021022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.021208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.021316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.021329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.021522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.021704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.021731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.021872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.022078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.022107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 
00:32:34.839 [2024-07-11 14:02:37.022269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.022488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.022516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.022763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.023208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.023669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.023914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.024124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.024375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.024403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.024721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.024971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.024999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.025168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.025368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.025395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 
00:32:34.839 [2024-07-11 14:02:37.025549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.025740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.025754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.025995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.026139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.026174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.026322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.026487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.026516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.026792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.027024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.027052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:34.839 qpair failed and we were unable to recover it. 00:32:34.839 [2024-07-11 14:02:37.027212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.027330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:34.839 [2024-07-11 14:02:37.027357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.316609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.316816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.316832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.316938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.317133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.317146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 
00:32:35.168 [2024-07-11 14:02:37.317435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.317585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.317598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.317792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.318236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.318797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.318959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.319166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.319351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.319374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.319552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.319725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.319739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.319862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 
00:32:35.168 [2024-07-11 14:02:37.320295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.320627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.320822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.321013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.321213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.321236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.321437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.321613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.321628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.321748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.321973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.322003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.322179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.322346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.322360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.322573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.322720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.322749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 
00:32:35.168 [2024-07-11 14:02:37.323044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.323210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.323224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.323473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.323682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.323712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.323941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.324238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.324269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.168 qpair failed and we were unable to recover it. 00:32:35.168 [2024-07-11 14:02:37.324438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.324560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.168 [2024-07-11 14:02:37.324589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.324895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.325116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.325146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.325369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.325583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.325617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.325780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 
00:32:35.169 [2024-07-11 14:02:37.326177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.326600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.326857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.327085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.327329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.327345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.327467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.327731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.327746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.327855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.328279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.328730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.328973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 
00:32:35.169 [2024-07-11 14:02:37.329194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.329400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.329429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.329643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.329840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.329876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.330096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.330366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.330382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.330501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.330624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.330640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.330886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.331218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.331608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.331804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 
00:32:35.169 [2024-07-11 14:02:37.331983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.332167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.332198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.332472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.332816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.332845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.333135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.333376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.333408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.169 qpair failed and we were unable to recover it. 00:32:35.169 [2024-07-11 14:02:37.333629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.333876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.169 [2024-07-11 14:02:37.333906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.334181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.334415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.334429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.334651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.334834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.334849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.335032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.335221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.335252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 
00:32:35.170 [2024-07-11 14:02:37.335408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.335642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.335672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.335882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.336301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.336715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.336900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.337145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.337315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.337346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.337648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.337861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.337890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.338112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.338391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.338421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 
00:32:35.170 [2024-07-11 14:02:37.338597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.338802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.338831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.339085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.339314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.339329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.339519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.339735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.339765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.339904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.340106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.340134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.340433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.340683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.340712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.340866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.341194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 
00:32:35.170 [2024-07-11 14:02:37.341503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.341737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.342024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.342172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.342203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.342482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.342782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.342812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.170 qpair failed and we were unable to recover it. 00:32:35.170 [2024-07-11 14:02:37.343087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.170 [2024-07-11 14:02:37.343310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.343341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.343534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.343749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.343779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.343986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.344258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.344288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.344636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.344864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.344893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 
00:32:35.171 [2024-07-11 14:02:37.345064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.345211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.345241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.345449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.345716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.345746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.345907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.346178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.346208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.346456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.346654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.346669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.346878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.347225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.347623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.347805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 
00:32:35.171 [2024-07-11 14:02:37.348040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.348210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.348241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.348397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.348530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.348560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.348839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.349186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.349218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.349442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.349566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.349595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.349795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.350327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.350796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.350961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 
00:32:35.171 [2024-07-11 14:02:37.351108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.351337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.351367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.351611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.351784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.351798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.352071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.352364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.171 [2024-07-11 14:02:37.352396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.171 qpair failed and we were unable to recover it. 00:32:35.171 [2024-07-11 14:02:37.352626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.352843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.352873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.353045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.353341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.353373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.353595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.353885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.353915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.354081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.354372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.354403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 
00:32:35.172 [2024-07-11 14:02:37.354538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.354782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.354812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.355041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.355272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.355303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.355615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.355909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.355939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.356093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.356331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.356346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.356619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.356837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.356867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.357118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.357297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.357327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.357485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.357679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.357710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 
00:32:35.172 [2024-07-11 14:02:37.357941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.358170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.358200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.358349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.358590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.358605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.358790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.358975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.359006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.359717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.360251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.360655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.360805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.361075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 
00:32:35.172 [2024-07-11 14:02:37.361471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.361792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.361915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.362047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.362233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.172 [2024-07-11 14:02:37.362267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.172 qpair failed and we were unable to recover it. 00:32:35.172 [2024-07-11 14:02:37.362410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.362562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.362593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.362740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.362964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.362993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.363293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.363433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.363448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.363581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.363694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.363712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 
00:32:35.173 [2024-07-11 14:02:37.363802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.364281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.364663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.364790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.365015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.365296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.365327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.365480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.365716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.365730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.365872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.366356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 
00:32:35.173 [2024-07-11 14:02:37.366682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.366868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.366983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.367238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.367623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.367869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.367977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.368189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.368221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.368383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.368624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.368654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.368813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.368980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.369010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 
00:32:35.173 [2024-07-11 14:02:37.369153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.369331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.369360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.369576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.369818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.369848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.173 [2024-07-11 14:02:37.370133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.370343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.173 [2024-07-11 14:02:37.370358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.173 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.370534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.370764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.370794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.371018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.371125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.371154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.371329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.371554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.371584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.371789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 
00:32:35.174 [2024-07-11 14:02:37.372275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.372579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.372848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.373020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.373271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.373301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.373532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.373691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.373720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.374027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.374199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.374241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.374384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.374581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.374611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 00:32:35.174 [2024-07-11 14:02:37.374860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.375014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.174 [2024-07-11 14:02:37.375043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:35.174 qpair failed and we were unable to recover it. 
00:32:35.174 [2024-07-11 14:02:37.375313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.174 [2024-07-11 14:02:37.375495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.174 [2024-07-11 14:02:37.375526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:35.174 qpair failed and we were unable to recover it.
[The same error sequence (two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x7f4cf8000b90 at 10.0.0.2:4420 and the line "qpair failed and we were unable to recover it.") repeats continuously from 14:02:37.375 through 14:02:37.421; the duplicate entries are elided here.]
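[Editor's note: errno 111 on Linux is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 was answered with a RST because nothing was accepting connections on the NVMe/TCP port at that moment. The following minimal C sketch (illustrative only, not SPDK code; it assumes the target host is reachable but has no listener on the port) reproduces the same errno that posix_sock_create reports above.]

    /* Minimal sketch (not SPDK code): reproduce connect() failing with
     * errno = 111 (ECONNREFUSED), as posix_sock_create reports when no
     * NVMe/TCP target is listening on 10.0.0.2:4420. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With a reachable host and no listener, this prints errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }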
00:32:35.179 [2024-07-11 14:02:37.421539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.179 [2024-07-11 14:02:37.421808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.179 [2024-07-11 14:02:37.421827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.179 qpair failed and we were unable to recover it.
[From this entry onward the failing queue pair handle is 0x7f4d08000b90 rather than 0x7f4cf8000b90, indicating a freshly allocated qpair; the same refused-connection sequence then repeats against 10.0.0.2:4420 through 14:02:37.430 and is likewise elided.]
00:32:35.180 [2024-07-11 14:02:37.430540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.430726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.430741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.430921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.431201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.431513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.431782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.431925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.432108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.432443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 
00:32:35.180 [2024-07-11 14:02:37.432712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.432838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.433018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.433272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.433608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.433751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.433993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.434344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.434717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.434980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 
00:32:35.180 [2024-07-11 14:02:37.435146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.435445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.435767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.435881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.436091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.436344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.436667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.436920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.437094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 
00:32:35.180 [2024-07-11 14:02:37.437416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.437731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.437985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.438114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.438321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.438336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.438528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.438792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.438807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.439080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.439337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.439352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.439542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.439728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.439743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.439935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.440058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.440073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 
00:32:35.180 [2024-07-11 14:02:37.440198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.440341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.440355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.180 qpair failed and we were unable to recover it. 00:32:35.180 [2024-07-11 14:02:37.440597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.180 [2024-07-11 14:02:37.440810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.440824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.440950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.441351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.441606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.441741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.441990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.442246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 
00:32:35.181 [2024-07-11 14:02:37.442627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.442828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.442958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.443303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.443692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.443924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.444114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.444384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.444400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.444540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.444649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.444664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.444845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 
00:32:35.181 [2024-07-11 14:02:37.445238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.445565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.445817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.445992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.446010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.446138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.446261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.446277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.446384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.446625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.446640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.446824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.447222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 
00:32:35.181 [2024-07-11 14:02:37.447522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.447799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.447935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.448239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.448546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.448748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.448889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.449252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.449628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.449770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 
00:32:35.181 [2024-07-11 14:02:37.449967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.450394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.450722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.450882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.451112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.451250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.181 [2024-07-11 14:02:37.451266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.181 qpair failed and we were unable to recover it. 00:32:35.181 [2024-07-11 14:02:37.451446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.451643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.451658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.451785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.451969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.451985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.452107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 
00:32:35.182 [2024-07-11 14:02:37.452421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.452831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.452978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.453156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.453448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.453784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.453964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.454172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.454453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 
00:32:35.182 [2024-07-11 14:02:37.454697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.454904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.455099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.455299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.455315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.455511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.455691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.455707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.455994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.456207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.456223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.456489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.456730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.456745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.456921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.457114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.457130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.457412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.457656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.457670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 
00:32:35.182 [2024-07-11 14:02:37.457792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.458290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.458758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.458947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.459117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.459254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.459269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.459407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.459605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.459620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.459830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.460148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 
00:32:35.182 [2024-07-11 14:02:37.460641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.460848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.461045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.461488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.461806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.461900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.462143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.462481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 00:32:35.182 [2024-07-11 14:02:37.462791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.182 [2024-07-11 14:02:37.462996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.182 qpair failed and we were unable to recover it. 
00:32:35.183 [2024-07-11 14:02:37.463291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.463411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.463426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.463550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.463782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.463796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.463936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.464277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.464713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.464903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.465147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.465435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 
00:32:35.183 [2024-07-11 14:02:37.465760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.465882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.465988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.466191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.466206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.466394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.466634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.466649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.466835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.467223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.467566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.467844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.467989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 
00:32:35.183 [2024-07-11 14:02:37.468212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.468524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.468783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.468993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.469181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.469382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.469397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.469608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.469848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.469863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.470037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.470182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.470197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 00:32:35.183 [2024-07-11 14:02:37.470336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.470586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.183 [2024-07-11 14:02:37.470602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.183 qpair failed and we were unable to recover it. 
00:32:35.183 [2024-07-11 14:02:37.470792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.470993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.183 qpair failed and we were unable to recover it.
00:32:35.183 [2024-07-11 14:02:37.471146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.183 qpair failed and we were unable to recover it.
00:32:35.183 [2024-07-11 14:02:37.471471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.183 qpair failed and we were unable to recover it.
00:32:35.183 [2024-07-11 14:02:37.471780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.471912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.183 qpair failed and we were unable to recover it.
00:32:35.183 [2024-07-11 14:02:37.472105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.472289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.472305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.183 qpair failed and we were unable to recover it.
00:32:35.183 [2024-07-11 14:02:37.472501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.472707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.183 [2024-07-11 14:02:37.472722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.472843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.473185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.473468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.473880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.473999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.474105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.474293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.474310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.474523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.474769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.474784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.474898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.475145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.475471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.475658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.475860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.476187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.476511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.476701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.476896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.477288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.477693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.477975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.478067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.478475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.478829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.478983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.479195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.479561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.479859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.479978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.480171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.480306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.480317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.480531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.480669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.480680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.480939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.481342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.481592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.481847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.481991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.482208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.482368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.482387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.482581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.482757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.482772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.482919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.483129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.483144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:35.184 qpair failed and we were unable to recover it.
00:32:35.184 [2024-07-11 14:02:37.483349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.184 [2024-07-11 14:02:37.483460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.483471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.483654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.483834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.483846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.483969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.484204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.484604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.484789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.484915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.485231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.485560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.485812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.486025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.486128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.486139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.486389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.486577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.486588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.486842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.487124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.487452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.487650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.487829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.488156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.488554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.488762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.489013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.489323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.489709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.489926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.490112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.490431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.490744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.490943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.491099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.491342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.491354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.491587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.491773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.491785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.491960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.492286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.492770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.492905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.493175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.493548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.493858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.493985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.494167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.494275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.185 [2024-07-11 14:02:37.494287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.185 qpair failed and we were unable to recover it.
00:32:35.185 [2024-07-11 14:02:37.494468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.494652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.494664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.494786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.494920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.494932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.495192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.495455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.495733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.495972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.496177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.496538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.496850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.496978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.497235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.497412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.497424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.497595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.497709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.497721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.497981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.498477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.498731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.498927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.499038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.499347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.499647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.499789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.500046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.500305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.500788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.500983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.501154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.501286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.501298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.501417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.501691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.501702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.501903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.502280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.502698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.502830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.502933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.503350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.503708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.503898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.504102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.504366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.504685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.504948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.186 [2024-07-11 14:02:37.505140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.505276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.186 [2024-07-11 14:02:37.505288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.186 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.505464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.505738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.505750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.505917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.506179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.506577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.506846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.507103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.507453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.507785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.507963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.508132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.508348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.508662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.508808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.509003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.509252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.509653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.509871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.510045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.510362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.510769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.510891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.511019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.511350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.511827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.511954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.512084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.512273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.512285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.512537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.512797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.512809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.512984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.513312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.513739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.513867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.514050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.514231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.514243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.514522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.514632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.514643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.514823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.515025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.187 [2024-07-11 14:02:37.515037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.187 qpair failed and we were unable to recover it.
00:32:35.187 [2024-07-11 14:02:37.515272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.188 [2024-07-11 14:02:37.515373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.188 [2024-07-11 14:02:37.515384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.188 qpair failed and we were unable to recover it.
00:32:35.188 [2024-07-11 14:02:37.515561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.515737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.515748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.515934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.516178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.516479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.516678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.516867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.517189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.517416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 
00:32:35.188 [2024-07-11 14:02:37.517659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.517786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.517981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.518260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.518493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.518736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.518914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.519282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.519724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.519907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 
00:32:35.188 [2024-07-11 14:02:37.520098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.520332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.520706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.520922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.521098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.521461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.521763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.521943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.522044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 
00:32:35.188 [2024-07-11 14:02:37.522343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.522665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.522857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.522985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.523283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.523544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.523788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.523983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.524153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 
00:32:35.188 [2024-07-11 14:02:37.524400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.188 [2024-07-11 14:02:37.524693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.188 [2024-07-11 14:02:37.524962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.188 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.525067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.525278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.525290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.525480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.525744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.525755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.525996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.526319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.526717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.526923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 
00:32:35.189 [2024-07-11 14:02:37.527040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.527442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.527706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.527926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.528044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.528416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.528649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.528845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.528947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 
00:32:35.189 [2024-07-11 14:02:37.529210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.529559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.529760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.529944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.530313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.530760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.530941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.531204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.531327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.531339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.531495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.531679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.531690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 
00:32:35.189 [2024-07-11 14:02:37.531895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.532310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.532639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.532765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.532948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.533278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.533586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.533697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.533868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 
00:32:35.189 [2024-07-11 14:02:37.534217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.534588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.534777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.534993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.535098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.535111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.535240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.535445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.535456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.189 qpair failed and we were unable to recover it. 00:32:35.189 [2024-07-11 14:02:37.535553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.189 [2024-07-11 14:02:37.535744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.535756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.535903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.536213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 
00:32:35.190 [2024-07-11 14:02:37.536508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.536846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.536975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.537170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.537339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.537351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.537466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.537703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.537715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.537904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.538189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.538517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.538707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 
00:32:35.190 [2024-07-11 14:02:37.538919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.539174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.539594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.539708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.539943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.540318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.540565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.540814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.540946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 
00:32:35.190 [2024-07-11 14:02:37.541253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.541652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.541833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.541998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.542310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.542600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.542845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.543023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.543355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 
00:32:35.190 [2024-07-11 14:02:37.543668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.543816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.543935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.544379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.544688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.544802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.544879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.545247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 00:32:35.190 [2024-07-11 14:02:37.545560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.545693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.190 qpair failed and we were unable to recover it. 
00:32:35.190 [2024-07-11 14:02:37.545897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.190 [2024-07-11 14:02:37.546164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.546175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.546289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.546410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.546421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.546623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.546813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.546825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.547060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.547450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.547862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.547990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.548126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 
00:32:35.191 [2024-07-11 14:02:37.548363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.548700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.548947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.549113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.549410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.549661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.549860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.550098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.550419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 
00:32:35.191 [2024-07-11 14:02:37.550717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.550901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.551167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.551503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.551844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.551981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.552149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.552419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.552432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.552632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.552800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.552811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 00:32:35.191 [2024-07-11 14:02:37.553024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.553196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.191 [2024-07-11 14:02:37.553208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:35.191 qpair failed and we were unable to recover it. 
00:32:35.191 [2024-07-11 14:02:37.553419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-07-11 14:02:37.553564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.191 [2024-07-11 14:02:37.553575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:35.191 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for tqpair=0x7f4d00000b90 from 14:02:37.553 through 14:02:37.594 ...]
00:32:35.194 [2024-07-11 14:02:37.594189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-07-11 14:02:37.594414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.194 [2024-07-11 14:02:37.594431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.194 qpair failed and we were unable to recover it.
[... the same sequence then repeats for the new handle tqpair=0x7f4d08000b90 from 14:02:37.594 through 14:02:37.617 ...]
00:32:35.469 [2024-07-11 14:02:37.616829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.469 [2024-07-11 14:02:37.617062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.469 [2024-07-11 14:02:37.617075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.469 qpair failed and we were unable to recover it.
00:32:35.469 [2024-07-11 14:02:37.617285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.617525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.617538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.617794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.617977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.617990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.618254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.618393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.618406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.618599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.618853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.618869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.619046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.619439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.619701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.619896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 
00:32:35.469 [2024-07-11 14:02:37.620018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.620356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.620677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.620886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.621001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.621196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.621209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.621456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.621629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.621642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.621953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.622234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.622247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.622494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.622681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.622697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 
00:32:35.469 [2024-07-11 14:02:37.622882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.623393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.623713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.623853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.624071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.624250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.624264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.624457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.624668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.624681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.624918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.625369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 
00:32:35.469 [2024-07-11 14:02:37.625710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.625899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.469 qpair failed and we were unable to recover it. 00:32:35.469 [2024-07-11 14:02:37.626079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.626296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.469 [2024-07-11 14:02:37.626325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.626519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.626828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.626862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.627078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.627352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.627381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.627611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.627830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.627859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.628168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.628310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.628323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.628541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.628700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.628727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.628898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.629325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.629662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.629877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.630121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.630423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.630452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.630697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.630985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.631014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.631267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.631416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.631450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.631689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.631982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.632016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.632203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.632398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.632412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.632689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.632993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.633022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.633345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.633479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.633491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.633692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.633855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.633883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.634089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.634222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.634252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.634444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.634743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.634771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.635101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.635309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.635338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.635520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.635649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.635661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.635842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.636114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.636127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.636354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.636569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.636598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.636884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.637123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.637152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.637341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.637574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.637602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.637854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.638097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.638133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.638376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.638498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.638526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.638838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.639069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.639097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.639330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.639629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.639658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.639891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.640059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.640087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.640351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.640540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.640567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.640804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.641073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.641102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.641302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.641604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.641634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.641800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.642022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.642051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.642343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.642564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.642576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.642833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.643031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.643059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.643373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.643600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.643628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.643872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.644105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.644136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.644377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.644563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.644592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.644895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.645311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.645693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.645935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.646117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.646373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.646387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.646633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.646964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.646994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.647241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.647479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.647507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.647736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.648221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.648613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.648817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 
00:32:35.470 [2024-07-11 14:02:37.649029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.649285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.649314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.649552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.649706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.649735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.649976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.650266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.650279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.470 qpair failed and we were unable to recover it. 00:32:35.470 [2024-07-11 14:02:37.650531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.470 [2024-07-11 14:02:37.650763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.650792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.651095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.651320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.651349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.651618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.651935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.651964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.652191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.652337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.652351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 
00:32:35.471 [2024-07-11 14:02:37.652549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.652696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.652709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.652915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.653402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.653798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.653997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.654137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.654485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.654515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.654751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.654961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.654974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.655236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.655464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.655492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 
00:32:35.471 [2024-07-11 14:02:37.655732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.655973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.656002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.656233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.656511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.656540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.656859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.657240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.657643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.657887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.658170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.658382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.658395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.658594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.658912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.658941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 
00:32:35.471 [2024-07-11 14:02:37.659243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.659458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.659471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.659618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.659807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.659836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.660129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.660312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.660343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.660571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.660892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.660920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.661199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.661402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.661431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.661679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.661928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.661957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.662200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.662368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.662396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 
00:32:35.471 [2024-07-11 14:02:37.662653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.662974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.663002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.663222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.663514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.663542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.663751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.663915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.663944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.664176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.664337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.664366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.664620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.664775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.664803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.665005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.665183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.665213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 00:32:35.471 [2024-07-11 14:02:37.665481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.665722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.471 [2024-07-11 14:02:37.665750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.471 qpair failed and we were unable to recover it. 
00:32:35.474 [2024-07-11 14:02:37.720750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.720971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.721000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.721237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.721550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.721580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.721729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.722042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.722071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.722359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.722693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.722722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.722959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.723221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.723251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.723514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.723725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.723753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.724074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.724282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.724295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 
00:32:35.474 [2024-07-11 14:02:37.724545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.724864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.724892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.725201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.725414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.725427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.725581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.725807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.725836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.726066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.726328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.726358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.726624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.726921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.726949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.474 qpair failed and we were unable to recover it. 00:32:35.474 [2024-07-11 14:02:37.727278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.727432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.474 [2024-07-11 14:02:37.727461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.727682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.727996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.728033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.728262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.728453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.728483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.728707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.729026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.729055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.729333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.729498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.729511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.729756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.729978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.730008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.730181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.730391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.730404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.730688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.730999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.731029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.731366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.731536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.731565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.731721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.732229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.732645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.732821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.733094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.733316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.733346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.733502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.733689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.733718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.733886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.734050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.734079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.734363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.734605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.734634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.734790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.735290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.735732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.735984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.736132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.736372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.736402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.736582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.736716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.736729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.736934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.737132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.737185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.737361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.737634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.737664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.737905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.738126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.738155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.738444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.738658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.738686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.739010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.739229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.739260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.739421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.739564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.739577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.739780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.740225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.740695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.740846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.741030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.741206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.741220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.741451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.741663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.741692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.741927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.742195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.742225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.742385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.742600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.742630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.742801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.743287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.743681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.743947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.744224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.744433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.744446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.744576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.744686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.744701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.744912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.745387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.745752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.745944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.746180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.746376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.746389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.746578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.746890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.746919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.747146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.747326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.747355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.747589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.747757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.747786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.747991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.748304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.748335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.748568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.748794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.748823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.749045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.749313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.749343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.749647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.749820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.749850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 00:32:35.475 [2024-07-11 14:02:37.750085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.750370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.750384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.475 qpair failed and we were unable to recover it. 
00:32:35.475 [2024-07-11 14:02:37.750518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.750731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.475 [2024-07-11 14:02:37.750760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.751053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.751255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.751293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.751528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.751711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.751740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.751963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.752259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.752289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.752458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.752641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.752669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.752925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.753206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.753237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.753376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.753568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.753597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 
00:32:35.476 [2024-07-11 14:02:37.753866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.754327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.754683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.754980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.755189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.755367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.755396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.755574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.755727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.755755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.755981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.756329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.756359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.756593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.756746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.756775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 
00:32:35.476 [2024-07-11 14:02:37.756952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.757180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.757209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.757377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.757642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.757670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.757895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.758222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.758252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.758481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.758774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.758811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.759112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.759341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.759372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.759531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.759729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.759757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.760035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.760306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.760336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 
00:32:35.476 [2024-07-11 14:02:37.760575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.760833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.760846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.761117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.761373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.761403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.761693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.761943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.761972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.762206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.762429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.762458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.762734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.762939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.762967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.763226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.763474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.763503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.763821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.764141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.764182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 
00:32:35.476 [2024-07-11 14:02:37.764394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.764614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.764644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.764885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.765129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.765158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.765493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.765655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.765684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.765912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.766386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.766772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.766945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 00:32:35.476 [2024-07-11 14:02:37.767237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.767478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.476 [2024-07-11 14:02:37.767491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:35.476 qpair failed and we were unable to recover it. 
00:32:35.476 [2024-07-11 14:02:37.767755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.476 [2024-07-11 14:02:37.768049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.476 [2024-07-11 14:02:37.768077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:35.476 qpair failed and we were unable to recover it.
[... the same four-record pattern -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 14:02:37.768 through roughly 14:02:37.835; only the timestamps differ, so the duplicate records are elided here ...]
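For reference while reading the loop above: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) was actively refused because nothing was accepting connections there, which is why nvme_tcp_qpair_connect_sock can never bring the qpair up. The following standalone sketch is not SPDK code; the address and port are simply copied from the log records above. It reproduces the same errno value:

```c
/* Minimal standalone sketch (not SPDK code): shows how connect() to a
 * host/port with no listener yields errno = 111 (ECONNREFUSED), the same
 * value posix_sock_create keeps logging above. 10.0.0.2 and 4420 are
 * copied from the log, not from any SPDK configuration. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* IANA-assigned NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on 10.0.0.2:4420 the peer answers the SYN with RST,
     * connect() returns -1, and errno is ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```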
00:32:35.479 [2024-07-11 14:02:37.835498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.479 qpair failed and we were unable to recover it.
[... at 14:02:37.835 the failing tqpair address changes from 0x7f4d08000b90 to 0x1413710, consistent with a freshly allocated qpair object; the identical connect() / errno = 111 failure pattern then continues against 10.0.0.2:4420 through 14:02:37.843, and those duplicate records are likewise elided ...]
00:32:35.479 [2024-07-11 14:02:37.843188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.843480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.843493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.479 qpair failed and we were unable to recover it. 00:32:35.479 [2024-07-11 14:02:37.843697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.843968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.843981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.479 qpair failed and we were unable to recover it. 00:32:35.479 [2024-07-11 14:02:37.844251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.844525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.844538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.479 qpair failed and we were unable to recover it. 00:32:35.479 [2024-07-11 14:02:37.844808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.844947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.844963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.479 qpair failed and we were unable to recover it. 00:32:35.479 [2024-07-11 14:02:37.845222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.845475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.479 [2024-07-11 14:02:37.845488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.479 qpair failed and we were unable to recover it. 00:32:35.479 [2024-07-11 14:02:37.845669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.845935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.845948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.846189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.846449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.846462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.846600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.846859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.846873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.847050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.847265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.847279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.847456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.847722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.847735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.847993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.848227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.848240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.848414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.848609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.848622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.848892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.849311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.849778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.849962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.850162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.850447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.850459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.850670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.850856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.850868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.851175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.851432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.851445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.851716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.851962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.851975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.852167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.852376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.852389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.852657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.852948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.852961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.853108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.853293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.853306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.853627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.853889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.853902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.854153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.854410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.854423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.854685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.854937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.854950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.855146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.855280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.855293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.855492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.855753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.855766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.856027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.856198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.856211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.856481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.856724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.856737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.857001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.857273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.857303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.857530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.857748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.857777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.858011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.858302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.858342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.858586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.858818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.858847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.859150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.859403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.859432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.859676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.859856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.859869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.860063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.860182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.860196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.860329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.860591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.860605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.860794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.861079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.861107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.861376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.861649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.861678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.861995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.862214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.862245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.862545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.862781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.862793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.863076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.863280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.863310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.863534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.863830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.863843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.864180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.864420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.864449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.864689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.864938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.864966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.865278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.865513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.865542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.865841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.866143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.866179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.866457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.866758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.866787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.867106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.867426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.867455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.867682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.867885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.867913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.868217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.868437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.868465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.868788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.869003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.869032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.869336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.869651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.869680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.869962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.870252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.870282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.870579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.870860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.870894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 00:32:35.480 [2024-07-11 14:02:37.871113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.871276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.871306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.480 qpair failed and we were unable to recover it. 
00:32:35.480 [2024-07-11 14:02:37.871608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.871812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.480 [2024-07-11 14:02:37.871840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.872051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.872381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.872411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.872689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.872927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.872955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.873175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.873478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.873507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.873831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.874145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.874181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.874494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.874740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.874768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.875017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.875292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.875322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.875484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.875776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.875805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.876098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.876383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.876418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.876630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.876926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.876955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.877252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.877578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.877607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.877829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.877994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.878022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.878249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.878476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.878489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.878761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.878990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.879019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.879270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.879591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.879620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.879844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.880093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.880121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.880427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.880661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.880690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.880931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.881127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.881156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.881442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.881743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.881772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.882104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.882343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.882373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.882597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.882896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.882925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.883245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.883520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.883548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.883847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.884142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.884180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.884462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.884686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.884715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.884944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.885169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.885199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.885498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.885705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.885733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.886037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.886265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.886295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.886574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.886881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.886894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.887070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.887348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.887378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.887736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.887953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.887981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.888214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.888534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.888562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.888797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.889173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.889616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.889870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.890177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.890502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.890531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.890771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.891054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.891083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.891312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.891609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.891638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.891849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.892120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.892149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.892435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.892644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.892673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.892970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.893235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.893249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.893533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.893803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.893831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.894156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.894475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.894503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 
00:32:35.481 [2024-07-11 14:02:37.894732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.895005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.895034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.895274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.895502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.895531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.895790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.896036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.896065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.896304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.896607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.896637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.896962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.897267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.897299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.481 qpair failed and we were unable to recover it. 00:32:35.481 [2024-07-11 14:02:37.897622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.897828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.481 [2024-07-11 14:02:37.897856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.898030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.898233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.898262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 
00:32:35.482 [2024-07-11 14:02:37.898566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.898822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.898852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.899143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.899432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.899463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.899715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.899866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.899895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.900191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.900384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.900397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.900667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.900847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.900876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.901182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.901479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.901507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.901783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.902099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.902131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 
00:32:35.482 [2024-07-11 14:02:37.902361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.902662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.902690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.902858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.903175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.903206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.903440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.903662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.903692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.904000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.904180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.904196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.904495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.904721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.904734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.905032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.905222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.905235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.905541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.905777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.905807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 
00:32:35.482 [2024-07-11 14:02:37.906063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.906278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.906308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.906521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.906741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.906769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.907058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.907316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.907346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.907579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.907798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.907810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.908121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.908311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.908325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.908450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.908727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.908756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.908935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.909214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.909243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 
00:32:35.482 [2024-07-11 14:02:37.909559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.909879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.909901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.910109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.910383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.910396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.482 [2024-07-11 14:02:37.910581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.910802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.482 [2024-07-11 14:02:37.910818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.482 qpair failed and we were unable to recover it. 00:32:35.752 [2024-07-11 14:02:37.911098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.911355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.911388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.752 qpair failed and we were unable to recover it. 00:32:35.752 [2024-07-11 14:02:37.911668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.912015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.912044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.752 qpair failed and we were unable to recover it. 00:32:35.752 [2024-07-11 14:02:37.912345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.912581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.912609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.752 qpair failed and we were unable to recover it. 00:32:35.752 [2024-07-11 14:02:37.912770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.752 [2024-07-11 14:02:37.913067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.913095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 
00:32:35.753 [2024-07-11 14:02:37.913330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.913514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.913544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.913765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.913985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.914014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.914320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.914558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.914587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.914869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.915076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.915089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.915351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.915540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.915553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.915830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.916123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.916152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.916470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.916683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.916713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 
00:32:35.753 [2024-07-11 14:02:37.916970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.917269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.917300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.917549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.917843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.917872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.918167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.918365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.918378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.918629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.918850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.918879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.919102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.919394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.919425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.919706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.919893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.919922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.920132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.920439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.920468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 
00:32:35.753 [2024-07-11 14:02:37.920794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.921067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.921097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.921378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.921595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.921624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.921905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.922207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.922237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.922414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.922692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.922721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.922874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.923195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.923226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.923543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.923763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.923792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.924109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.924411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.924441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 
00:32:35.753 [2024-07-11 14:02:37.924768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.924985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.925014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.925268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.925577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.925607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.925939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.926109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.926138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.926469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.926695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.926708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.926986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.927143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.927182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.927339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.927646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.927675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.927903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.928122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.928151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 
00:32:35.753 [2024-07-11 14:02:37.928378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.928658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.928687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.928983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.929120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.753 [2024-07-11 14:02:37.929133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.753 qpair failed and we were unable to recover it. 00:32:35.753 [2024-07-11 14:02:37.929369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.929584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.929612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.929834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.930131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.930181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.930437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.930733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.930762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.930986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.931217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.931253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.931591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.931890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.931919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 
00:32:35.754 [2024-07-11 14:02:37.932134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.932387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.932418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.932650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.932953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.932982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.933207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.933447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.933461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.933641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.933941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.933970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.934186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.934436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.934466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.934700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.934906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.934935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.935111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.935355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.935385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 
00:32:35.754 [2024-07-11 14:02:37.935695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.935971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.936001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.936255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.936483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.936513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.936692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.937024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.937054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.937307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.937482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.937511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.937857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.938127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.938156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.938391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.938686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.938716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.938947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.939231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.939262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 
00:32:35.754 [2024-07-11 14:02:37.939550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.939770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.939783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.939926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.940248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.940280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.940510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.940689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.940717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.941025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.941197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.941228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.941464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.941631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.941660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.941906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.942225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.942255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.942538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.942768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.942798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 
00:32:35.754 [2024-07-11 14:02:37.943121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.943392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.943422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.943655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.943904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.943933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.944244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.944459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.944488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.944713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.944949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.944978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.754 qpair failed and we were unable to recover it. 00:32:35.754 [2024-07-11 14:02:37.945177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.754 [2024-07-11 14:02:37.945394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.945407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.945684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.945988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.946017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.946349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.946513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.946542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 
00:32:35.755 [2024-07-11 14:02:37.946836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.947020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.947049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.947371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.947675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.947704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.947954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.948461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.948759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.948935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.949248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.949484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.949513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.949816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.950097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.950128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 
00:32:35.755 [2024-07-11 14:02:37.950446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.950676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.950704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.950955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.951106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.951135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.951461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.951670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.951699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.952004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.952268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.952299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.952612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.952932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.952961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.953121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.953366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.953396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.953653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.953950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.953978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 
00:32:35.755 [2024-07-11 14:02:37.954228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.954460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.954488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.954724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.955005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.955034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.955283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.955607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.955636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.955893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.956121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.956151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.956396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.956612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.956641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.956925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.957190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.957221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.957529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.957785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.957814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 
00:32:35.755 [2024-07-11 14:02:37.958097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.958324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.958360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.958668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.958915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.958928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.959188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.959505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.959534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.959771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.960090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.960120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.960426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.960677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.960706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.961005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.961283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.961313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 00:32:35.755 [2024-07-11 14:02:37.961662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.961907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.755 [2024-07-11 14:02:37.961936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.755 qpair failed and we were unable to recover it. 
00:32:35.756 [2024-07-11 14:02:37.962220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.962454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.962484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.962796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.963330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.963739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.963978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.964271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.964549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.964578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.964888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.965180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.965193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.965403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.965702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.965731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 
00:32:35.756 [2024-07-11 14:02:37.966072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.966303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.966334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.966581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.966780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.966793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.967049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.967320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.967350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.967659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.967889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.967918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.968226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.968568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.968596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.968851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.969123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.969136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.969453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.969799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.969828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 
00:32:35.756 [2024-07-11 14:02:37.970112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.970456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.970487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.970785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.971011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.971040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.971324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.971557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.971585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.971816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.972059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.972088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.972396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.972709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.972747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.973060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.973304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.973334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.973634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.973964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.973993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 
00:32:35.756 [2024-07-11 14:02:37.974221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.974401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.974432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.974686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.974963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.974992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.975275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.975467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.975497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.975727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.976025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.976054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.976343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.976606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.976634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.976804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.977094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.977123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.977427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.977759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.977788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 
00:32:35.756 [2024-07-11 14:02:37.978028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.978310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.978340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.978641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.978971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.979000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.756 qpair failed and we were unable to recover it. 00:32:35.756 [2024-07-11 14:02:37.979226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.756 [2024-07-11 14:02:37.979521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.979549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.979876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.980174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.980188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.980474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.980799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.980828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.981064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.981309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.981323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.981531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.981807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.981836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 
00:32:35.757 [2024-07-11 14:02:37.982098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.982401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.982431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.982666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.982969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.982998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.983310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.983643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.983672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.983983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.984239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.984269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.984558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.984808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.984837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.985082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.985391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.985422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.985659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.985874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.985887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 
00:32:35.757 [2024-07-11 14:02:37.986088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.986351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.986382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.986550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.986848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.986878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.987171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.987505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.987539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.987861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.988181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.988212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.988428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.988735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.988764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.989096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.989408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.989438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.989695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 
00:32:35.757 [2024-07-11 14:02:37.990299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.990713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.990923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.991185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.991520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.991549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.991710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.991860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.991889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.992201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.992472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.992485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.992778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.992935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.992969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 00:32:35.757 [2024-07-11 14:02:37.993258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.993497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.993526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.757 qpair failed and we were unable to recover it. 
00:32:35.757 [2024-07-11 14:02:37.993779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.757 [2024-07-11 14:02:37.994035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.994049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.994274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.994461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.994474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.994754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.994967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.994997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.995307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.995502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.995515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.995696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.995984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.996013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.996333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.996506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.996534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.996861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.997105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.997135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 
00:32:35.758 [2024-07-11 14:02:37.997463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.997771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.997800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.998135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.998384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.998398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.998695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.998870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.998898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.999202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.999437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:37.999465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:37.999770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.000106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.000136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.000448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.000766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.000795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.000958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.001193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.001208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 
00:32:35.758 [2024-07-11 14:02:38.001474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.001778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.001808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.002133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.002379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.002409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.002737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.002952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.002966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.003149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.003428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.003459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.003714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.003884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.003898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.004090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.004372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.004403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.004704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.005039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.005068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 
00:32:35.758 [2024-07-11 14:02:38.005381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.005609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.005638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.005945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.006240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.006271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.006528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.006951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.006987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.007247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.007472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.007487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.007695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.007924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.007954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.008179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.008406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.008436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.008652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.008944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.008974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 
00:32:35.758 [2024-07-11 14:02:38.009206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.009512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.009541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.009876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.010184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.010216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.758 [2024-07-11 14:02:38.010434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.010661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.758 [2024-07-11 14:02:38.010690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.758 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.010960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.011237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.011267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.011434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.011753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.011782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.012110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.012427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.012458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.012772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.013096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.013125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 
00:32:35.759 [2024-07-11 14:02:38.013392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.013652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.013681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.013967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.014243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.014257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.014493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.014763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.014793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.015098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.015433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.015463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.015721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.016025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.016039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.016345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.016683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.016712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.017013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.017219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.017250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 
00:32:35.759 [2024-07-11 14:02:38.017481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.017851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.017880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.018059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.018293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.018324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.018553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.018858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.018886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.019214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.019467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.019480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.019784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.020040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.020069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.020329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.020535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.020564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.020816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.021121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.021150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 
00:32:35.759 [2024-07-11 14:02:38.021414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.021715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.021749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.022093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.022388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.022420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.022721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.022977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.023007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.023236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.023551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.023580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.023895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.024205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.024235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.024527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.024808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.024838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.025199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.025434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.025465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 
00:32:35.759 [2024-07-11 14:02:38.025786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.026107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.026137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.026401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.026634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.026664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.026925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.027208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.027222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.759 qpair failed and we were unable to recover it. 00:32:35.759 [2024-07-11 14:02:38.027351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.027584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.759 [2024-07-11 14:02:38.027614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.027910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.028190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.028221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.028566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.028865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.028895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.029226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.029459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.029488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 
00:32:35.760 [2024-07-11 14:02:38.029741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.030014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.030027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.030248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.030538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.030568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.030793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.031107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.031137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.031462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.031775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.031805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.031973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.032225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.032256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.032574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.032805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.032834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.033174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.033469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.033498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 
00:32:35.760 [2024-07-11 14:02:38.033813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.034023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.034052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.034357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.034612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.034641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.034858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.035186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.035217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.035533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.035823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.035853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.036068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.036364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.036379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.036585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.036760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.036773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.037005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 
00:32:35.760 [2024-07-11 14:02:38.037355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.037761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.037962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.038291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.038584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.038613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.038867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.039178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.039209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.039371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.039660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.039690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.040026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.040284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.040331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 00:32:35.760 [2024-07-11 14:02:38.040517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.040786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.760 [2024-07-11 14:02:38.040815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.760 qpair failed and we were unable to recover it. 
00:32:35.760 [2024-07-11 14:02:38.041060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.041358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.041373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.041652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.041905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.041919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.042118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.042332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.042363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.042664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.042885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.042914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.043246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.043554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.043584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.043912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.044221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.760 [2024-07-11 14:02:38.044252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.760 qpair failed and we were unable to recover it.
00:32:35.760 [2024-07-11 14:02:38.044563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.044851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.044880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.045185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.045436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.045470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.045768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.046005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.046034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.046196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.046401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.046430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.046745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.047062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.047092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.047441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.047740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.047770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.048005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.048334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.048365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.048682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.048926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.048955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.049198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.049528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.049557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.049804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.050026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.050056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.050299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.050587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.050621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.050886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.051119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.051133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.051397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.051679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.051709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.052077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.052379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.052393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.052700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.052988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.053017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.053243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.053483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.053512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.053823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.054079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.054108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.054449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.054755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.054783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.055076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.055291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.055321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.055576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.055863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.055906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.056205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.056512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.056542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.056808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.057069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.057098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.057311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.057574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.057603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.057922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.058176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.058190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.058451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.058582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.058596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.058859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.059045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.059059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.761 qpair failed and we were unable to recover it.
00:32:35.761 [2024-07-11 14:02:38.059368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.059660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.761 [2024-07-11 14:02:38.059689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.060041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.060305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.060336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.060631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.060901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.060930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.061182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.061412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.061441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.061779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.061926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.061957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.062228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.062536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.062550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.062734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.062928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.062942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.063216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.063408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.063422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.063742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.064032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.064046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.064334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.064480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.064494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.064689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.064995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.065009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.065217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.065498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.065512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.065768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.065999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.066013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.066322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.066507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.066521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.066740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.066999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.067012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.067203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.067417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.067431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.067720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.067856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.067869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.068094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.068346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.068360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.068643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.068903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.068917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.069138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.069337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.069352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.069637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.069905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.069919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.070205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.070404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.070418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.070703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.071016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.071030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.071224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.071505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.071519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.071828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.072172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.072582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.072814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.073100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.073300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.073315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.073509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.073778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.073792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.074052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.074264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.074278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.074509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.074690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.762 [2024-07-11 14:02:38.074704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.762 qpair failed and we were unable to recover it.
00:32:35.762 [2024-07-11 14:02:38.074889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.075232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.075709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.075929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.076125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.076389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.076404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.076722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.076880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.076896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.077091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.077276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.077291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.077498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.077706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.077737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.077991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.078216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.078229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.078508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.078790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.078804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.079085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.079340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.079354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.079503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.079689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.079703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.080006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.080452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.080846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.080986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.081191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.081399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.081415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.081636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.081863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.081876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.082171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.082351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.082366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.082570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.082848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.082861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.083170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.083370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.083384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.083643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.083838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.083851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.084130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.084361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.084375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.084561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.084760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.084773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.085041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.085171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.085186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.085468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.085755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.085768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.086030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.086233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.086247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.086470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.086775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.086789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.087064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.087331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.087345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.087575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.087833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.087846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.088145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.088423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.088437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.088684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.088812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.088826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.763 [2024-07-11 14:02:38.089092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.089236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.763 [2024-07-11 14:02:38.089251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.763 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.089412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.089603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.089617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.089838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.090097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.090111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.090410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.090659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.090673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.090984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.091171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.091186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.091377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.091661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.091674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.091953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.092206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.092220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.092530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.092750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.092764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.093047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.093344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.093358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.093570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.093772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.093786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.093974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.094251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.094265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.094556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.094810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.094823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.095071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.095297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.095311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.095444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.095744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.095757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.096053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.096247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.096261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.096539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.096796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.096809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.097046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.097241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.097255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.097531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.097747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.097760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.098043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.098360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.098373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.098580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.098773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.098786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.099042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.099221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.099234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.099510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.099785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.099799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.100051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.100277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.100291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.100543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.100794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.100808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.101114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.101363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.101377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.101654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.101932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.101946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.102220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.102432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.102445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.102648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.102895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.102908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.103112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.103388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.103402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.103682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.103931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.103945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.104129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.104377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.104391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.764 [2024-07-11 14:02:38.104572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.104707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.764 [2024-07-11 14:02:38.104720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.764 qpair failed and we were unable to recover it.
00:32:35.765 [2024-07-11 14:02:38.104997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.765 [2024-07-11 14:02:38.105249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.765 [2024-07-11 14:02:38.105263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.765 qpair failed and we were unable to recover it.
00:32:35.765 [2024-07-11 14:02:38.105389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.105637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.105657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.105851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.106099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.106113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.106392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.106634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.106651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.106856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.107125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.107140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.107348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.107657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.107671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.107940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.108154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.108173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.108421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.108687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.108700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 
00:32:35.765 [2024-07-11 14:02:38.108929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.109222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.109236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.109518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.109780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.109793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.110064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.110318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.110332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.110584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.110856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.110869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.111091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.111374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.111405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.111691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.111866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.111895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.112233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.112536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.112570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 
00:32:35.765 [2024-07-11 14:02:38.112868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.113201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.113238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.113438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.113687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.113700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.113990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.114151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.114193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.114438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.114740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.114769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.115098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.115313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.115344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.115654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.115977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.116006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.116322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.116625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.116654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 
00:32:35.765 [2024-07-11 14:02:38.116991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.117238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.117269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.117512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.117758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.117787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.118106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.118340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.118370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.118675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.118938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.118977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.119236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.119547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.119577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.765 qpair failed and we were unable to recover it. 00:32:35.765 [2024-07-11 14:02:38.119859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.765 [2024-07-11 14:02:38.120178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.120208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.120524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.120804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.120833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 
00:32:35.766 [2024-07-11 14:02:38.120991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.121276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.121307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.121535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.121717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.121746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.122033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.122254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.122284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.122527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.122738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.122767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.123009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.123219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.123250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.123486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.123791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.123821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.124169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.124463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.124493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 
00:32:35.766 [2024-07-11 14:02:38.124827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.124994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.125024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.125332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.125622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.125652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.125818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.126062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.126091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.126414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.126647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.126676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.126934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.127237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.127268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.127593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.127859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.127872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.128077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.128393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.128422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 
00:32:35.766 [2024-07-11 14:02:38.128667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.128902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.128931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.129237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.129569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.129598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.129881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.130105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.130134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.130380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.130686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.130714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.130940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.131249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.131279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.131514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.131699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.131728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.131943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.132229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.132244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 
00:32:35.766 [2024-07-11 14:02:38.132452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.132754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.132784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.133087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.133338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.133369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.133607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.133838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.133866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.134016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.134296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.134327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.134554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.134851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.134891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.135111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.135431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.135462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.766 [2024-07-11 14:02:38.135781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.136095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.136124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 
00:32:35.766 [2024-07-11 14:02:38.136457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.136627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.766 [2024-07-11 14:02:38.136656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.766 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.136893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.137218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.137250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.137582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.137861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.137890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.138187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.138485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.138514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.138753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.139051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.139080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.139374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.139584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.139612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.139851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.140134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.140171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 
00:32:35.767 [2024-07-11 14:02:38.140430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.140710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.140739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.141001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.141216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.141230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.141409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.141672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.141702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.141863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.142039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.142068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.142282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.142560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.142588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.142824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.143117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.143146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.143498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.143824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.143837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 
00:32:35.767 [2024-07-11 14:02:38.143949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.144175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.144205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.144455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.144736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.144765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.145076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.145394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.145425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.145592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.145803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.145831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.146071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.146321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.146350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.146689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.146946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.146975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.147188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.147433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.147471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 
00:32:35.767 [2024-07-11 14:02:38.147758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.147991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.148019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.148199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.148459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.148488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.148654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.148957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.148986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.149249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.149495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.149524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.149832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.150070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.150100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.150355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.150665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.150694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.150950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.151184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.151215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 
00:32:35.767 [2024-07-11 14:02:38.151528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.151831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.151861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.152148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.152447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.152484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.152738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.152968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.152998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.767 [2024-07-11 14:02:38.153318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.153502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.767 [2024-07-11 14:02:38.153531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.767 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.153788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.154042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.154071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.154291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.154521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.154534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.154738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.154990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.155003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 
00:32:35.768 [2024-07-11 14:02:38.155233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.155527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.155556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.155862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.156144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.156185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.156444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.156749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.156778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.157111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.157425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.157456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.157675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.157981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.158009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.158319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.158546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.158575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.158816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.159063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.159093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 
00:32:35.768 [2024-07-11 14:02:38.159389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.159591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.159605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.159917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.160206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.160236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.160548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.160837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.160867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.161093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.161408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.161438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.161723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.161987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.162016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.162328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.162590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.162619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.162910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.163215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.163250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 
00:32:35.768 [2024-07-11 14:02:38.163605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.163775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.163805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.164028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.164339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.164370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.164694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.165018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.165048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.165370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.165695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.165723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.165961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.166257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.166288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.166463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.166696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.166715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 00:32:35.768 [2024-07-11 14:02:38.166930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.167224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:35.768 [2024-07-11 14:02:38.167254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:35.768 qpair failed and we were unable to recover it. 
00:32:35.768 [2024-07-11 14:02:38.167514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.768 [2024-07-11 14:02:38.167747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:35.768 [2024-07-11 14:02:38.167776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:35.768 qpair failed and we were unable to recover it.
[The same three-message failure group (two posix_sock_create connect() errors with errno = 111, then an nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1413710 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 14:02:38.167514 through 14:02:38.254912.]
00:32:36.044 [2024-07-11 14:02:38.254564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.045 [2024-07-11 14:02:38.254882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.045 [2024-07-11 14:02:38.254912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.045 qpair failed and we were unable to recover it.
00:32:36.045 [2024-07-11 14:02:38.255132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.255383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.255413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.255663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.255878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.255912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.256234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.256553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.256583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.256823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.257126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.257139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.257352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.257576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.257590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.257775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.258057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.258070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.258373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.258637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.258666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 
00:32:36.045 [2024-07-11 14:02:38.258935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.259194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.259225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.259534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.259862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.259891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.260136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.260385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.260416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.260733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.260984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.261012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.261304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.261473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.261502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.261730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.261969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.261999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.262249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.262489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.262518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 
00:32:36.045 [2024-07-11 14:02:38.262829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.263175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.263205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.263447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.263686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.263715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.264027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.264261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.264291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.264546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.264826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.264854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.265181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.265422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.265450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.265770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.266051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.266091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.266433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.266661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.266691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 
00:32:36.045 [2024-07-11 14:02:38.266920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.267231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.267263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.267563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.267804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.267818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.045 [2024-07-11 14:02:38.268105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.268341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.045 [2024-07-11 14:02:38.268372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.045 qpair failed and we were unable to recover it. 00:32:36.046 [2024-07-11 14:02:38.268588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.268877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.268906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.046 qpair failed and we were unable to recover it. 00:32:36.046 [2024-07-11 14:02:38.269208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.269397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.269426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.046 qpair failed and we were unable to recover it. 00:32:36.046 [2024-07-11 14:02:38.269735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.270025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.270055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.046 qpair failed and we were unable to recover it. 00:32:36.046 [2024-07-11 14:02:38.270280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.270592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.046 [2024-07-11 14:02:38.270622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.046 qpair failed and we were unable to recover it. 
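errno = 111 on Linux is ECONNREFUSED: every TCP SYN the host sends to 10.0.0.2:4420 (the standard NVMe/TCP port) is answered with a reset because no process is listening on the target side; the harness output just below shows the target application being killed and then restarted. The following is a minimal, self-contained C sketch of the same failure mode, assuming only a port with no listener (loopback stands in for the log's 10.0.0.2); it is illustrative and is not SPDK's posix.c code:

/* econnrefused_demo.c - illustrative only; not SPDK code.
 * Connecting to a TCP port with no listener fails with
 * errno = 111 (ECONNREFUSED), as in the log above.
 * Build: cc -o econnrefused_demo econnrefused_demo.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* loopback for the demo */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}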
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.270855 through 14:02:38.271397 ...]
00:32:36.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1789554 Killed "${NVMF_APP[@]}" "$@"
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.271681 through 14:02:38.272135 ...]
00:32:36.046 14:02:38 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.272402 through 14:02:38.272683 ...]
00:32:36.046 14:02:38 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.272701 through 14:02:38.272971 ...]
00:32:36.046 14:02:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.273199 through 14:02:38.273219 ...]
00:32:36.046 14:02:38 -- common/autotest_common.sh@712 -- # xtrace_disable
00:32:36.046 [2024-07-11 14:02:38.273480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.046 14:02:38 -- common/autotest_common.sh@10 -- # set +x
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.273686 through 14:02:38.279644 ...]
00:32:36.046 14:02:38 -- nvmf/common.sh@469 -- # nvmfpid=1790363
00:32:36.046 [... failure pattern continues, timestamps 14:02:38.279929 through 14:02:38.279943 ...]
00:32:36.046 14:02:38 -- nvmf/common.sh@470 -- # waitforlisten 1790363
00:32:36.046 [2024-07-11 14:02:38.280140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.046 14:02:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.280272 through 14:02:38.280476 ...]
00:32:36.047 14:02:38 -- common/autotest_common.sh@819 -- # '[' -z 1790363 ']'
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.280681 through 14:02:38.280695 ...]
00:32:36.047 14:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.280905 through 14:02:38.281033 ...]
00:32:36.047 14:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:36.047 [2024-07-11 14:02:38.281178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.047 14:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:36.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.281389 through 14:02:38.281544 ...]
00:32:36.047 14:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.281694 through 14:02:38.281708 ...]
00:32:36.047 14:02:38 -- common/autotest_common.sh@10 -- # set +x
00:32:36.047 [... failure pattern continues, timestamps 14:02:38.281894 through 14:02:38.282725 ...]
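The trace above shows the harness relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then calling waitforlisten 1790363, which blocks until the new process is accepting connections on its RPC socket /var/tmp/spdk.sock (with max_retries=100). Below is a hedged C analog of that wait loop; the socket path and retry budget are taken from the log, but everything else is an assumption for illustration, since the real helper is a shell function in autotest_common.sh and may differ in detail:

/* wait_for_listen.c - illustrative analog of the harness's
 * waitforlisten helper: retry connecting to the app's UNIX
 * domain RPC socket until the server is up or we give up.
 * Not SPDK code; path and retry count mirror the log above.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* server is listening */
        }
        close(fd);
        /* ENOENT: socket file not created yet; ECONNREFUSED:
         * file exists but nothing is accepting yet. Retry both. */
        usleep(100 * 1000);      /* 100 ms between attempts */
    }
    return -1;                   /* gave up waiting */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("process is up and listening\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}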
00:32:36.047 [... the identical failure pattern (connect() failed, errno = 111; sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) continues uninterrupted, timestamps 14:02:38.282993 through 14:02:38.316825 ...]
00:32:36.049 [2024-07-11 14:02:38.317008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.317344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.317359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.317682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.317903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.317916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.318177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.318451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.318464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.318739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.318920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.318933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.319151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.319419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.319433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.319636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.319919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.319932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.320205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.320399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.320412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 
00:32:36.049 [2024-07-11 14:02:38.320741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.320941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.320954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.321221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.321400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.321413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.321743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.322021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.322035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.322243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.322504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.322518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.322788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.323042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.323056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.323313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.323590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.323603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 00:32:36.049 [2024-07-11 14:02:38.323859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.324004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.049 [2024-07-11 14:02:38.324017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.049 qpair failed and we were unable to recover it. 
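errno 111 is ECONNREFUSED: the initiator's connect() reaches 10.0.0.2, but nothing is accepting on TCP port 4420 yet, so each qpair attempt fails and SPDK retries. Below is a minimal bash sketch of the same reachability probe; the address and port come from the log above, while the retry count and sleep interval are arbitrary:

    # Probe the NVMe-oF TCP listen port; tells "refused" apart from "up".
    for i in 1 2 3 4 5; do
      if (echo -n >/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "attempt $i: 10.0.0.2:4420 accepted the connection"
        break
      fi
      echo "attempt $i: connection refused (errno 111); retrying in 1s"
      sleep 1
    done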
00:32:36.050 (connect()/qpair failures to 10.0.0.2:4420 continue while an SPDK application initializes)
00:32:36.050 [2024-07-11 14:02:38.326609] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:32:36.050 [2024-07-11 14:02:38.326651] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
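The two initialization records above are among the only non-repeated lines in this stretch, so they are easy to pull out of a saved copy of the console output. A small sketch, assuming the log was saved as build.log (hypothetical filename):

    LOG=build.log
    # Count occurrences of the refused connect(), not just matching physical lines
    grep -o 'connect() failed, errno = 111' "$LOG" | wc -l
    # Locate the one-off initialization and hugepage records
    grep -n -e 'Starting SPDK' -e 'DPDK EAL parameters' -e 'No free 2048 kB hugepages' "$LOG"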
00:32:36.050 (connect() failed, errno = 111 and the matching nvme_tcp_qpair_connect_sock error for tqpair=0x1413710 repeat continuously from 14:02:38.327 through 14:02:38.354)
00:32:36.052 [2024-07-11 14:02:38.354574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.052 EAL: No free 2048 kB hugepages reported on node 1
00:32:36.052 (the connect()/qpair failure sequence continues uninterrupted around the EAL warning)
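The EAL warning above reports an empty 2 MiB hugepage pool on NUMA node 1; SPDK's DPDK-backed allocator draws its memory from hugepages, so an empty pool on the local node is worth checking whenever initialization stalls. A minimal sketch using the standard sysfs interface; the page count of 1024 is illustrative, not a value taken from this log:

    # Current 2 MiB hugepage pool on NUMA node 1 (the node in the warning)
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # Reserve 1024 pages (2 GiB) on that node; needs root
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # System-wide summary
    grep -i huge /proc/meminfo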
00:32:36.052 [2024-07-11 14:02:38.357595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.357866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.357880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.358060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.358305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.358320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.358444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.358730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.358744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.358937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.359217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.359232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.359528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.359788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.359801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.360022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.360283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.360296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.360523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.360734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.360747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 
00:32:36.052 [2024-07-11 14:02:38.360966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.361222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.361235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.361431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.361670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.361683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.361957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.362214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.362228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.362525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.362790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.362803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.362944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.363196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.363211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.363403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.363645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.363659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.363904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.364119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.364135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 
00:32:36.052 [2024-07-11 14:02:38.364398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.364593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.364606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.364890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.365174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.365188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.365387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.365600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.365613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.052 [2024-07-11 14:02:38.365811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.366065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.052 [2024-07-11 14:02:38.366079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.052 qpair failed and we were unable to recover it. 00:32:36.053 [2024-07-11 14:02:38.366325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.366515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.366528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.053 qpair failed and we were unable to recover it. 00:32:36.053 [2024-07-11 14:02:38.366721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.366909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.366923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.053 qpair failed and we were unable to recover it. 00:32:36.053 [2024-07-11 14:02:38.367181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.367376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.053 [2024-07-11 14:02:38.367391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.053 qpair failed and we were unable to recover it. 
00:32:36.055 [2024-07-11 14:02:38.398131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
[... the tqpair=0x1413710 retry pattern resumes and repeats from 14:02:38.398 through 14:02:38.411 ...]
00:32:36.056 [2024-07-11 14:02:38.411458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.056 [2024-07-11 14:02:38.411688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.056 [2024-07-11 14:02:38.411708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.056 qpair failed and we were unable to recover it.
[... the same pattern, now reporting tqpair=0x7f4d08000b90, repeats from 14:02:38.411 through 14:02:38.436 ...]
00:32:36.056 [2024-07-11 14:02:38.413287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.413421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.413434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.413631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.413841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.413854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.414030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.414295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.414308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.414570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.414809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.414822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.415082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.415286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.415300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.415522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.415783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.415796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.416080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.416349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.416362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 
00:32:36.056 [2024-07-11 14:02:38.416635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.416805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.416819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.417085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.417269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.417284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.417554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.417742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.417757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.418038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.418331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.418347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.418614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.418887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.418903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.419099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.419393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.419408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.419655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.419905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.419920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 
00:32:36.056 [2024-07-11 14:02:38.420169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.420357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.420370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.420581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.420836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.420850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.421027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.421220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.421236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.421449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.421638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.421652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.056 qpair failed and we were unable to recover it. 00:32:36.056 [2024-07-11 14:02:38.421852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.056 [2024-07-11 14:02:38.422128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.057 [2024-07-11 14:02:38.422142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.057 qpair failed and we were unable to recover it. 00:32:36.057 [2024-07-11 14:02:38.422291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.057 [2024-07-11 14:02:38.422559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.057 [2024-07-11 14:02:38.422574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.057 qpair failed and we were unable to recover it. 00:32:36.057 [2024-07-11 14:02:38.422870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.057 [2024-07-11 14:02:38.423129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.057 [2024-07-11 14:02:38.423142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.057 qpair failed and we were unable to recover it. 
00:32:36.057 [2024-07-11 14:02:38.423413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.057 [2024-07-11 14:02:38.423535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.057 [2024-07-11 14:02:38.423549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.057 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure sequence repeated verbatim for 2024-07-11 14:02:38.423761 through 14:02:38.436215, differing only in timestamps ...]
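errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 yet, so every qpair connect attempt from the initiator is refused at the socket layer. A minimal standalone sketch of that failure mode (illustration only, not SPDK code; loopback and the NVMe/TCP default port stand in for the target, assuming no local listener on that port):

/* Illustrative only, not SPDK code: reproduce "connect() failed, errno = 111". */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* loopback stands in for 10.0.0.2 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, Linux reports errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}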
00:32:36.058 [2024-07-11 14:02:38.436468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.436719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.436733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
00:32:36.058 [2024-07-11 14:02:38.436998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.437198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.437213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
00:32:36.058 [2024-07-11 14:02:38.437337] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:36.058 [2024-07-11 14:02:38.437448] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:36.058 [2024-07-11 14:02:38.437456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:36.058 [2024-07-11 14:02:38.437463] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:36.058 [2024-07-11 14:02:38.437576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:32:36.058 [2024-07-11 14:02:38.437682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:32:36.058 [2024-07-11 14:02:38.437787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:32:36.058 [2024-07-11 14:02:38.437789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:32:36.058 [2024-07-11 14:02:38.437511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.437770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.437783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
00:32:36.058 [2024-07-11 14:02:38.438000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.438260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.438274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
00:32:36.058 [2024-07-11 14:02:38.438507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.438750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.438763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
00:32:36.058 [2024-07-11 14:02:38.438953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.439224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.058 [2024-07-11 14:02:38.439238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.058 qpair failed and we were unable to recover it.
[... same connect()/qpair-failure sequence repeated verbatim for 2024-07-11 14:02:38.439480 through 14:02:38.494539, differing only in timestamps; elapsed-time column advances 00:32:36.058 -> 00:32:36.334 over the run ...]
00:32:36.334 [2024-07-11 14:02:38.494680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.494859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.494872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.494999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.495190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.495203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.495377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.495619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.495632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.495906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.496289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.496785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.496983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.497199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.497455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.497468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 
00:32:36.334 [2024-07-11 14:02:38.497746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.498008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.498021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.498284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.498401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.498414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.498704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.499016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.499029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.499282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.499530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.499543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.499833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.500086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.500098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.500342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.500549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.500561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.500830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 
00:32:36.334 [2024-07-11 14:02:38.501373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.501704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.501903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.502162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.502348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.502361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.502547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.502785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.502797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.503012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.503274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.503287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.503428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.503676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.503689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.334 [2024-07-11 14:02:38.503956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.504227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.504239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 
00:32:36.334 [2024-07-11 14:02:38.504483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.504603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.334 [2024-07-11 14:02:38.504616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.334 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.504883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.505068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.505082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.505277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.505538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.505551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.505743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.506004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.506017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.506262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.506520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.506533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.506800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.507255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 
00:32:36.335 [2024-07-11 14:02:38.507681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.507954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.508220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.508466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.508478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.508693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.508929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.508941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.509138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.509382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.509395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.509654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.509897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.509910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.510033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.510286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.510299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.510546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.510734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.510746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 
00:32:36.335 [2024-07-11 14:02:38.511015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.511251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.511265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.511461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.511654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.511667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.511917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.512181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.512194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.512448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.512750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.512762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.512968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.513234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.513247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.513530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.513768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.513781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.513974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.514187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.514201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 
00:32:36.335 [2024-07-11 14:02:38.514449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.514700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.514713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.514980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.515257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.515271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.515465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.515748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.515760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.516033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.516296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.516309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.516542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.516734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.516746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.516944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.517342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 
00:32:36.335 [2024-07-11 14:02:38.517722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.517978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.518105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.518388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.518402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.518689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.518940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.518953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.335 qpair failed and we were unable to recover it. 00:32:36.335 [2024-07-11 14:02:38.519175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.335 [2024-07-11 14:02:38.519491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.519506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.519741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.519999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.520012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.520227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.520411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.520423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.520695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.520884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.520896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 
00:32:36.336 [2024-07-11 14:02:38.521162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.521401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.521414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.521595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.521858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.521871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.522189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.522295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.522307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.522551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.522762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.522775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.522893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.523069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.523082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.523270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.523557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.523570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.523868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.524163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.524179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 
00:32:36.336 [2024-07-11 14:02:38.524445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.524709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.524722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.525004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.525189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.525202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.525421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.525598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.525611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.525898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.526376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.526847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.526994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.527284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.527542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.527554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 
00:32:36.336 [2024-07-11 14:02:38.527807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.527992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.528004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.528267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.528476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.528489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.528771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.529063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.529079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.529349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.529613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.529626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.529868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.530135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.530147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.530329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.530515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.530528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.530739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 
00:32:36.336 [2024-07-11 14:02:38.531261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.531714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.531990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.532179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.532378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.532391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.532647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.532861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.532874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.533168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.533368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.533381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.336 [2024-07-11 14:02:38.533650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.533895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.336 [2024-07-11 14:02:38.533911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.336 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.534109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.534372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.534385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 
00:32:36.337 [2024-07-11 14:02:38.534614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.534851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.534864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.535130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.535394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.535407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.535532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.535794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.535807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.536040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.536305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.536318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.536587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.536825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.536838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.537033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.537217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.537231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.537438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.537684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.537697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 
00:32:36.337 [2024-07-11 14:02:38.537942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.538210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.538223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.538483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.538724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.538736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.539007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.539203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.539216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.539495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.539805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.539818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.540071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.540241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.540254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.540494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.540618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.540631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.540903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.541165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.541178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 
00:32:36.337 [2024-07-11 14:02:38.541354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.541570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.541583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.541843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.542320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.542792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.542986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.543269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.543457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.543469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.543659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.543914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.543927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 00:32:36.337 [2024-07-11 14:02:38.544194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.544443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.337 [2024-07-11 14:02:38.544456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.337 qpair failed and we were unable to recover it. 
00:32:36.338 [2024-07-11 14:02:38.557666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.557836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.557849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.338 [2024-07-11 14:02:38.558094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.558220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.558233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.338 [2024-07-11 14:02:38.558497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.558668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.558680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.338 [2024-07-11 14:02:38.558948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.559134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.559147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.338 [2024-07-11 14:02:38.559325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.559545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.559557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.338 [2024-07-11 14:02:38.559827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.559995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.338 [2024-07-11 14:02:38.560004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.338 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.560192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.560479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.560489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.570166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.570405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.570418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.570617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.570735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.570748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.570984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.571231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.571251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.571442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.571692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.571706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.571987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.572175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.572190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.572402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.572611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.572625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.339 [2024-07-11 14:02:38.572829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.573015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.339 [2024-07-11 14:02:38.573029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.339 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.592616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.592832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.592846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.592981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.593270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.593283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.593526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.593818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.593831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.594023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.594281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.594294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.594518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.594747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.594761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.595055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.595349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.595366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.341 [2024-07-11 14:02:38.595516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.595758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.341 [2024-07-11 14:02:38.595771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.341 qpair failed and we were unable to recover it.
00:32:36.342 [2024-07-11 14:02:38.611480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.611767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.611779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.611892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.612340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.612841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.612974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.613169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.613365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.613378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.613622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.613869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.613881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.342 qpair failed and we were unable to recover it. 00:32:36.342 [2024-07-11 14:02:38.614118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.342 [2024-07-11 14:02:38.614294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.614308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 
00:32:36.343 [2024-07-11 14:02:38.614495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.614675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.614689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.614872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.615346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.615773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.615997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.616185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.616369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.616382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.616507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.616692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.616706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.616902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 
00:32:36.343 [2024-07-11 14:02:38.617212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.617513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.617780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.617913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.618313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.618786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.618997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.619212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.619420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.619432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.619676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.619935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.619948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 
00:32:36.343 [2024-07-11 14:02:38.620220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.620392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.620404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.620645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.620780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.620796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.621050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.621171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.621184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.621361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.621556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.621569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.621849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.622258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.622680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.622923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 
00:32:36.343 [2024-07-11 14:02:38.623186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.623426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.623439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.623706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.623905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.623917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.624225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.624414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.624426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.624564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.624805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.624818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.625060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.625398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.625773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.625967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 
00:32:36.343 [2024-07-11 14:02:38.626096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.626296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.626309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.626549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.626806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.626819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.343 qpair failed and we were unable to recover it. 00:32:36.343 [2024-07-11 14:02:38.627061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.343 [2024-07-11 14:02:38.627293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.627307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.627577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.627759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.627772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.628051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.628291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.628304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.628579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.628831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.628844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.629039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.629246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.629259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 
00:32:36.344 [2024-07-11 14:02:38.629447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.629725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.629740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.629965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.630151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.630169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.630481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.630744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.630756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.631003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.631224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.631238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.631451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.631636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.631650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.631900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.631991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.632004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.632122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.632424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.632438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 
00:32:36.344 [2024-07-11 14:02:38.632660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.632846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.632859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.633124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.633386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.633400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.633586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.633777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.633790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.634054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.634242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.634258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.634442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.634637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.634650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.634861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.635347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 
00:32:36.344 [2024-07-11 14:02:38.635771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.635981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.636246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.636438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.636451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.636743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.636997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.637010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.637260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.637465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.637478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.637663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.637921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.637934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.638212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.638481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.638493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.638734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.638996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.639008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 
00:32:36.344 [2024-07-11 14:02:38.639141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.639347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.639360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.639471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.639719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.639732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.639870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.640343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.640652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.640872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.344 qpair failed and we were unable to recover it. 00:32:36.344 [2024-07-11 14:02:38.641045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.344 [2024-07-11 14:02:38.641238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.641251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.641443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.641564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.641577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 
00:32:36.345 [2024-07-11 14:02:38.641700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.641932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.641945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.642136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.642327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.642340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.642535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.642720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.642733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.642857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.643276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.643674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.643788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.644038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.644240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.644253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 
00:32:36.345 [2024-07-11 14:02:38.644442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.644629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.644642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.644906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.645291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.645629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.645756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.646025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.646362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.646743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.646883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 
00:32:36.345 [2024-07-11 14:02:38.647013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.647299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.647670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.647853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.648097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.648324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.648729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.648924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.649142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.649263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.649276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 
00:32:36.345 [2024-07-11 14:02:38.649519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.649758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.649770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.649957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.650363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.650694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.650921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.651097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.651435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.651776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.651901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 
00:32:36.345 [2024-07-11 14:02:38.652119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.652265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.345 [2024-07-11 14:02:38.652278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.345 qpair failed and we were unable to recover it. 00:32:36.345 [2024-07-11 14:02:38.652479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.652671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.652684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 00:32:36.346 [2024-07-11 14:02:38.652944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 00:32:36.346 [2024-07-11 14:02:38.653273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 00:32:36.346 [2024-07-11 14:02:38.653647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.653831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 00:32:36.346 [2024-07-11 14:02:38.654018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.654170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.654183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 00:32:36.346 [2024-07-11 14:02:38.654295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.654402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.346 [2024-07-11 14:02:38.654414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.346 qpair failed and we were unable to recover it. 
[2024-07-11 14:02:38.654570 through 14:02:38.705143: 140 further identical repetitions of the posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it. sequence for tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420, elided]
00:32:36.351 [2024-07-11 14:02:38.705271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.351 [2024-07-11 14:02:38.705452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.351 [2024-07-11 14:02:38.705465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.351 qpair failed and we were unable to recover it.
00:32:36.355 [2024-07-11 14:02:38.745421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.745642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.745651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.745760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.745836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.745845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.746101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.746332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.746342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.746509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.746684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.746694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.746860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.747149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.747517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 
00:32:36.355 [2024-07-11 14:02:38.747888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.747999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.748118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.748328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.748704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.748807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.748974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.749287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.749605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.749727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 
00:32:36.355 [2024-07-11 14:02:38.749913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.750274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.750717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.750839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.751031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.751285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.355 [2024-07-11 14:02:38.751668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.355 [2024-07-11 14:02:38.751928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.355 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.752044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 
00:32:36.356 [2024-07-11 14:02:38.752506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.752777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.752970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.753047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.753412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.753680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.753923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.754039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.754446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 
00:32:36.356 [2024-07-11 14:02:38.754776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.754881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.755007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.755388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.755732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.755915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.755988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.756095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.756104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.756346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.756597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.756606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.756841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 
00:32:36.356 [2024-07-11 14:02:38.757301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.757613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.757796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.758050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.758338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.758745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.758946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.759070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.759435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 
00:32:36.356 [2024-07-11 14:02:38.759811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.759941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.760174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.760351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.760360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.760535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.760717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.760726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.760956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.761290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.761564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.761695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.761864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 
00:32:36.356 [2024-07-11 14:02:38.762270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.356 qpair failed and we were unable to recover it. 00:32:36.356 [2024-07-11 14:02:38.762540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.356 [2024-07-11 14:02:38.762712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.762800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.763359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.763589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.763712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.763883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.764192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 
00:32:36.357 [2024-07-11 14:02:38.764439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.764882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.764993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.765196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.765453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.765757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.765993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.766183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.766486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 
00:32:36.357 [2024-07-11 14:02:38.766710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.766945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.767144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.767279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.767288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.767475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.767588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.767597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.767840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.768360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.768755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.768942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.769129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.769325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.769334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 
00:32:36.357 [2024-07-11 14:02:38.769447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.769641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.769650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.769836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.770220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.770605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.770709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.770968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.771283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.771594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.771885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 
00:32:36.357 [2024-07-11 14:02:38.772006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.772227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.772671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.772846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.773016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.773198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.773208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.357 qpair failed and we were unable to recover it. 00:32:36.357 [2024-07-11 14:02:38.773443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.357 [2024-07-11 14:02:38.773563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.773572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 00:32:36.358 [2024-07-11 14:02:38.773709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.773886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.773896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 00:32:36.358 [2024-07-11 14:02:38.774008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 
00:32:36.358 [2024-07-11 14:02:38.774278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 00:32:36.358 [2024-07-11 14:02:38.774646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.774826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 00:32:36.358 [2024-07-11 14:02:38.774962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.775211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.358 [2024-07-11 14:02:38.775221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.358 qpair failed and we were unable to recover it. 00:32:36.358 [2024-07-11 14:02:38.775381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.775565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.775579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.775693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.775876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.775886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.776072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.776425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 
00:32:36.690 [2024-07-11 14:02:38.776737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.776945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.777128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.777309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.777319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.777558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.777796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.777807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.777976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.778300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.778558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.778755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.778949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 
00:32:36.690 [2024-07-11 14:02:38.779187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.779425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.779755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.779886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.780058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.780171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.780181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.780306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.780422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.690 [2024-07-11 14:02:38.780431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.690 qpair failed and we were unable to recover it. 00:32:36.690 [2024-07-11 14:02:38.780595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.780766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.780775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.691 qpair failed and we were unable to recover it. 00:32:36.691 [2024-07-11 14:02:38.780947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.781179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.781191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.691 qpair failed and we were unable to recover it. 
00:32:36.691 [2024-07-11 14:02:38.781358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.781459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.691 [2024-07-11 14:02:38.781469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.691 qpair failed and we were unable to recover it. 00:32:36.691 [... the same sequence (two posix_sock_create connect() failures with errno = 111, then the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f4d00000b90, addr=10.0.0.2, port=4420 from 14:02:38.781597 through 14:02:38.821434, roughly 120 further iterations ...]
00:32:36.695 [... the 0x7f4d00000b90 repetitions continue through 14:02:38.822585 ...] 00:32:36.695 [2024-07-11 14:02:38.822823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.695 [2024-07-11 14:02:38.823072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.695 [2024-07-11 14:02:38.823090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.695 qpair failed and we were unable to recover it. 00:32:36.695 [2024-07-11 14:02:38.823232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.695 [2024-07-11 14:02:38.823430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.695 [2024-07-11 14:02:38.823447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.695 qpair failed and we were unable to recover it. 00:32:36.695 [... the same sequence then repeats for tqpair=0x7f4cf8000b90, addr=10.0.0.2, port=4420 from 14:02:38.823582 through 14:02:38.830181 ...]
00:32:36.696 [2024-07-11 14:02:38.830301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.830438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.830451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.830590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.830700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.830713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.830840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.831139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.831383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.831632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.831759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.831880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 
00:32:36.696 [2024-07-11 14:02:38.832258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.832570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.832701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.832894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.833257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.833576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.833712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.833909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.834013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.834026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.696 qpair failed and we were unable to recover it. 00:32:36.696 [2024-07-11 14:02:38.834139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.834271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.696 [2024-07-11 14:02:38.834285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 
00:32:36.697 [2024-07-11 14:02:38.834405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.834585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.834600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.834756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.834870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.834883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.835008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.835276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.835649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.835783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.835981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.836288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 
00:32:36.697 [2024-07-11 14:02:38.836518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.836858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.836991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.837158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.837415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.837639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.837820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.838092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.838352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 
00:32:36.697 [2024-07-11 14:02:38.838589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.838824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.838996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.839183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.839467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.839740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.839937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.840096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.840333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 
00:32:36.697 [2024-07-11 14:02:38.840641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.840839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.840951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.841266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.841487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.841731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.841865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.841959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.842352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 
00:32:36.697 [2024-07-11 14:02:38.842724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.842819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.843061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.843192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.843206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.697 qpair failed and we were unable to recover it. 00:32:36.697 [2024-07-11 14:02:38.843384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.697 [2024-07-11 14:02:38.843540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.843552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.843669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.843789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.843801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.843995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.844252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.844597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 
00:32:36.698 [2024-07-11 14:02:38.844862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.844979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.845094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.845382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.845655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.845788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.845989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.846368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.846685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.846939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 
00:32:36.698 [2024-07-11 14:02:38.847045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.847297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.847616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.847743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.847899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.848273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.848522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.848748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.848968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 
00:32:36.698 [2024-07-11 14:02:38.849075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.849256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.849269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.849462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.849589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.849603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.849816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.850203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.850513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.850876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.850989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.851124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 
00:32:36.698 [2024-07-11 14:02:38.851436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.851684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.851805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.851928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.852284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.852628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.698 [2024-07-11 14:02:38.852768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.698 qpair failed and we were unable to recover it. 00:32:36.698 [2024-07-11 14:02:38.852943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.853219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 
00:32:36.699 [2024-07-11 14:02:38.853547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.853811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.853944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.854078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.854445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.854683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.854813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.854926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.855193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 
00:32:36.699 [2024-07-11 14:02:38.855528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.855848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.855962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.856073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.856331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.856598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.856841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.856968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.857122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.857316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.857329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 
00:32:36.699 [2024-07-11 14:02:38.857561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.857681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.857694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.857813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.858191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.858431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.858702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.858831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.858967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 00:32:36.699 [2024-07-11 14:02:38.859206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it. 
00:32:36.699 [2024-07-11 14:02:38.859506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.699 [2024-07-11 14:02:38.859695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.699 qpair failed and we were unable to recover it.
[... the same cycle, two "connect() failed, errno = 111" messages from posix.c:1032, one "sock connection error" from nvme_tcp.c:2289, then "qpair failed and we were unable to recover it.", repeats continuously for tqpair=0x7f4cf8000b90 through 14:02:38.868, and continues for tqpair=0x7f4d00000b90 from 14:02:38.868 through 14:02:38.903, always against addr=10.0.0.2, port=4420 ...]
00:32:36.704 [2024-07-11 14:02:38.903543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.903709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.903718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it.
00:32:36.704 [2024-07-11 14:02:38.903820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.903979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.903988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.904166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.904489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.904775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.904888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.905005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.905299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.905533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.905803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 
00:32:36.704 [2024-07-11 14:02:38.905904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.906042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.906051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.906153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.906261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.906270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.704 qpair failed and we were unable to recover it. 00:32:36.704 [2024-07-11 14:02:38.906368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.704 [2024-07-11 14:02:38.906464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.906474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.906571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.906749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.906760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.906927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.907315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.907532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 
00:32:36.705 [2024-07-11 14:02:38.907764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.907945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.908058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.908336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.908573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.908798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.908925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.909039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.909306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 
00:32:36.705 [2024-07-11 14:02:38.909489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.909712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.909818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.910011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.910235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.910432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.910695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.910845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.910981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 
00:32:36.705 [2024-07-11 14:02:38.911222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.911605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.911837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.911962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.912085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.912337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.912724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.912841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.912951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 
00:32:36.705 [2024-07-11 14:02:38.913209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.913579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.913860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.913987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.914111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.914372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.914688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.914814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.705 [2024-07-11 14:02:38.914942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.915120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.915133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 
00:32:36.705 [2024-07-11 14:02:38.915266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.915399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.705 [2024-07-11 14:02:38.915412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.705 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.915615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.915728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.915741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.915847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.915957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.915970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.916098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.916351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.916668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.916852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.916975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 
00:32:36.706 [2024-07-11 14:02:38.917238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.917556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.917755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.917867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.918191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.918440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.918759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.918898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.919032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 
00:32:36.706 [2024-07-11 14:02:38.919335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.919551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.919854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.919981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.920224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.920404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.920640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.920834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.920949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 
00:32:36.706 [2024-07-11 14:02:38.921174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.921510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.921820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.921995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.922189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.922518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.922761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.922894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.923010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 
00:32:36.706 [2024-07-11 14:02:38.923325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.923565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.923804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.923915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.924149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.924536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.924854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.924977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.925110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 
00:32:36.706 [2024-07-11 14:02:38.925356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.706 qpair failed and we were unable to recover it. 00:32:36.706 [2024-07-11 14:02:38.925582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.706 [2024-07-11 14:02:38.925713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.925838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.925948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.925961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.926091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.926381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.926698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.926898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.927020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 
00:32:36.707 [2024-07-11 14:02:38.927272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.927594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.927841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.927958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.928143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.928473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.928719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.928845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.929030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 
00:32:36.707 [2024-07-11 14:02:38.929399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.929711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.929895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.930087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.930334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.930651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.930791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.930991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.931123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.931135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 00:32:36.707 [2024-07-11 14:02:38.931231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.931369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.707 [2024-07-11 14:02:38.931382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420 00:32:36.707 qpair failed and we were unable to recover it. 
00:32:36.707 [2024-07-11 14:02:38.931508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.707 [2024-07-11 14:02:38.931768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.707 [2024-07-11 14:02:38.931781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.707 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats 139 more times between 14:02:38.931895 and 14:02:38.974496 ...]
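For context: errno = 111 on Linux is ECONNREFUSED, meaning every TCP connect to 10.0.0.2:4420 (the conventional NVMe/TCP port) is being actively refused, typically because nothing is listening on that address at this point in the test. A minimal sketch with plain POSIX sockets, not SPDK code, of how connect() surfaces exactly this errno:

```c
/* Minimal sketch (plain sockets, not SPDK): a blocking connect() to a
 * host/port with no listener fails with errno 111 (ECONNREFUSED),
 * which is the error posix_sock_create keeps logging above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),            /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n",
               errno, strerror(errno));       /* 111, "Connection refused" */
    close(fd);
    return 0;
}
```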
00:32:36.711 [2024-07-11 14:02:38.974626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.974733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.974743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.974925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.975142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.975431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.975720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.975908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.976018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.976397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 
00:32:36.711 [2024-07-11 14:02:38.976678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.976880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.977009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.977297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.977691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.977818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.977945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.978201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.978436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 
00:32:36.711 [2024-07-11 14:02:38.978758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.978951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.979132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.979572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.979845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.979973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.980100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.980222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.711 [2024-07-11 14:02:38.980238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.711 qpair failed and we were unable to recover it. 00:32:36.711 [2024-07-11 14:02:38.980357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.980480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.980494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.980611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.980744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.980758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.980874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.980998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.981203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.981450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.981680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.981869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.981988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.982313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.982538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.982833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.982928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.983037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.983397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.983641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.983777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.983901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.984130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.984421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.984731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.984850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.985042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.985351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.985678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.985868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.985979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.986280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.986552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.986849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.986948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.987073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.987386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.987730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.987858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.988054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.988332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.988553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.988796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.988985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.989097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.989314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.989573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.989894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.989995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.990124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.990394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 
00:32:36.712 [2024-07-11 14:02:38.990637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.990833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.712 [2024-07-11 14:02:38.991017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.991193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.712 [2024-07-11 14:02:38.991207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.712 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.991391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.991564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.991578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.991704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.991891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.991904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.992016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.992281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.992562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 
00:32:36.713 [2024-07-11 14:02:38.992870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.992988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.993123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.993366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.993630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.993824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.993937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.994248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.994469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 
00:32:36.713 [2024-07-11 14:02:38.994723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.994865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.994985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.995325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.995650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.995784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.995962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.996278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.996527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 
00:32:36.713 [2024-07-11 14:02:38.996778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.996976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.997184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.997439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.997768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.997910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.998018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.998342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.998592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.998705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 
00:32:36.713 [2024-07-11 14:02:38.998883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.999197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.999627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:38.999856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:38.999980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.000287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.000606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.000740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.000867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 
00:32:36.713 [2024-07-11 14:02:39.001293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.001604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.001807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.001990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.002334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.002635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.002766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.713 qpair failed and we were unable to recover it. 00:32:36.713 [2024-07-11 14:02:39.002977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.713 [2024-07-11 14:02:39.003102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.003299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 
00:32:36.714 [2024-07-11 14:02:39.003625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.003842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.003989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.004124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.004372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.004687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.004806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.004916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.005332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 
00:32:36.714 [2024-07-11 14:02:39.005591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.005838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.005988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.006104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.006514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.006744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.006950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.007073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.007197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.007211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 00:32:36.714 [2024-07-11 14:02:39.007472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.007578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.714 [2024-07-11 14:02:39.007591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.714 qpair failed and we were unable to recover it. 
00:32:36.714 [2024-07-11 14:02:39.007805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.714 [2024-07-11 14:02:39.007922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.714 [2024-07-11 14:02:39.007936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:36.714 qpair failed and we were unable to recover it.
[... 48 further identical failures for tqpair=0x7f4cf8000b90 between 14:02:39.008109 and 14:02:39.022334 elided: each is the same connect() errno = 111 pair at posix.c:1032, the same sock connection error at nvme_tcp.c:2289 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:32:36.715 [2024-07-11 14:02:39.022536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.715 [2024-07-11 14:02:39.022692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.715 [2024-07-11 14:02:39.022709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:36.715 qpair failed and we were unable to recover it.
[... 6 identical failures for tqpair=0x7f4d00000b90 between 14:02:39.022843 and 14:02:39.024543 elided ...]
00:32:36.715 [2024-07-11 14:02:39.024661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.715 [2024-07-11 14:02:39.024787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.715 [2024-07-11 14:02:39.024797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.715 qpair failed and we were unable to recover it.
[... 62 further identical failures for tqpair=0x7f4d00000b90 between 14:02:39.024909 and 14:02:39.042149 elided ...]
00:32:36.717 [2024-07-11 14:02:39.042195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1421170 (9): Bad file descriptor
00:32:36.717 [2024-07-11 14:02:39.042401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.717 [2024-07-11 14:02:39.042563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.717 [2024-07-11 14:02:39.042580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.717 qpair failed and we were unable to recover it.
[... 6 identical failures for tqpair=0x7f4cf8000b90 between 14:02:39.042726 and 14:02:39.044277 elided ...]
00:32:36.717 [2024-07-11 14:02:39.044387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.717 [2024-07-11 14:02:39.044574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.717 [2024-07-11 14:02:39.044588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:36.717 qpair failed and we were unable to recover it.
[... 27 further identical failures for tqpair=0x7f4cf8000b90 between 14:02:39.044678 and 14:02:39.051714 elided ...]
00:32:36.718 [2024-07-11 14:02:39.051842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.052239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.052483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.052784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.052974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.053108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.053347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.053613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.053809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 
00:32:36.718 [2024-07-11 14:02:39.053933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.054247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.054473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.054784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.054980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.055098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.055407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.055633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.055772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 
00:32:36.718 [2024-07-11 14:02:39.055879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.056195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.056614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.056865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.056996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.057153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.057408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.057636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 
00:32:36.718 [2024-07-11 14:02:39.057890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.057989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.058119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.058353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.058816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.058946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.059128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.059342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.059662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.059796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 
00:32:36.718 [2024-07-11 14:02:39.060000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.060265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.060503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.060739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.060861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.060940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.061065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.061078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.718 qpair failed and we were unable to recover it. 00:32:36.718 [2024-07-11 14:02:39.061220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.718 [2024-07-11 14:02:39.061402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.061415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.061661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.061838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.061851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 
00:32:36.719 [2024-07-11 14:02:39.061959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.062209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.062444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.062715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.062906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.063029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.063413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.063688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.063818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 
00:32:36.719 [2024-07-11 14:02:39.063957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.064213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.064446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.064826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.064949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.065131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.065399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 00:32:36.719 [2024-07-11 14:02:39.065733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.719 [2024-07-11 14:02:39.065919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420 00:32:36.719 qpair failed and we were unable to recover it. 
00:32:36.719 [2024-07-11 14:02:39.066025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.719 [2024-07-11 14:02:39.066135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.719 [2024-07-11 14:02:39.066148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf8000b90 with addr=10.0.0.2, port=4420
00:32:36.719 qpair failed and we were unable to recover it.
00:32:36.719 [2024-07-11 14:02:39.066267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.719 [2024-07-11 14:02:39.066410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.719 [2024-07-11 14:02:39.066423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.719 qpair failed and we were unable to recover it.
[... the retries continue against the new qpair, tqpair=0x7f4d08000b90; the same failure sequence repeats from 14:02:39.066600 through 14:02:39.082692 ...]
00:32:36.991 [2024-07-11 14:02:39.082873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.991 [2024-07-11 14:02:39.083008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.991 [2024-07-11 14:02:39.083022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:36.991 qpair failed and we were unable to recover it.
[... from 14:02:39.083144 the host retries against another new qpair, tqpair=0x7f4d00000b90, with the same failure sequence repeating until the final attempt below ...]
00:32:36.991 [2024-07-11 14:02:39.088524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.991 [2024-07-11 14:02:39.088653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.991 [2024-07-11 14:02:39.088664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.991 qpair failed and we were unable to recover it.
00:32:36.991 [2024-07-11 14:02:39.088828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.991 [2024-07-11 14:02:39.089006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.991 [2024-07-11 14:02:39.089017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.991 qpair failed and we were unable to recover it. 00:32:36.991 [2024-07-11 14:02:39.089149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.991 [2024-07-11 14:02:39.089224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.991 [2024-07-11 14:02:39.089234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.991 qpair failed and we were unable to recover it. 00:32:36.991 [2024-07-11 14:02:39.089332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.991 [2024-07-11 14:02:39.089445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.089456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.089562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.089660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.089671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.089849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.089973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.089984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.090094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.090348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 
00:32:36.992 [2024-07-11 14:02:39.090713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.090830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.090938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.091124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.091373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.091701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.091823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.091938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.092166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 
00:32:36.992 [2024-07-11 14:02:39.092416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.092790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.092903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.093097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.093345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.093566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.093809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.093987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.094093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 
00:32:36.992 [2024-07-11 14:02:39.094401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.094675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.094886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.094994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.095131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.095435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.095659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.095772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.095946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 
00:32:36.992 [2024-07-11 14:02:39.096358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.096686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.096794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.097031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.097277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.097524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.097730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.992 qpair failed and we were unable to recover it. 00:32:36.992 [2024-07-11 14:02:39.097903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.992 [2024-07-11 14:02:39.098012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.098126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 
00:32:36.993 [2024-07-11 14:02:39.098432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.098673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.098898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.099037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.099232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.099594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.099844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.099961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.100150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 
00:32:36.993 [2024-07-11 14:02:39.100386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.100732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.100860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.100953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.101276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.101582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.101817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.101993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.102112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 
00:32:36.993 [2024-07-11 14:02:39.102381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.102625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.102875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.102978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.103092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.103352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.103663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.103848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.103968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 
00:32:36.993 [2024-07-11 14:02:39.104173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.104405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.104713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.104914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.105102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.105338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.105558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.993 [2024-07-11 14:02:39.105779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.105894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 
00:32:36.993 [2024-07-11 14:02:39.106142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.106266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.993 [2024-07-11 14:02:39.106277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.993 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.106379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.106605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.106783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.106903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.107042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.107432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.107661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.107789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 
00:32:36.994 [2024-07-11 14:02:39.107900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.108228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.108534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.108720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.108902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.109254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.109541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.109771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.109891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 
00:32:36.994 [2024-07-11 14:02:39.110082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.110335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.110538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.110769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.110950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.111128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.111375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.111608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 
00:32:36.994 [2024-07-11 14:02:39.111835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.111951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.112191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.112304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.112314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.112505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.112765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.112776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.112970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.113057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.113066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.113229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.113361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.113370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.994 qpair failed and we were unable to recover it. 00:32:36.994 [2024-07-11 14:02:39.113497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.994 [2024-07-11 14:02:39.113616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.113626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.113796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.113912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.113921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 
00:32:36.995 [2024-07-11 14:02:39.114010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.114316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.114527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.114763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.114891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.115024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.115255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 00:32:36.995 [2024-07-11 14:02:39.115483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.995 [2024-07-11 14:02:39.115594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.995 qpair failed and we were unable to recover it. 
00:32:36.995 [2024-07-11 14:02:39.115765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.995 [2024-07-11 14:02:39.115937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:36.995 [2024-07-11 14:02:39.115946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:36.995 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for every retry from 14:02:39.116 through 14:02:39.152 ...]
00:32:36.999 [2024-07-11 14:02:39.152692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.152787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.152797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.152896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.153199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.153437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.153717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 14:02:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:36.999 [2024-07-11 14:02:39.153870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.153881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.153979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 14:02:39 -- common/autotest_common.sh@852 -- # return 0 00:32:36.999 [2024-07-11 14:02:39.154152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.154167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 00:32:36.999 [2024-07-11 14:02:39.154337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.154446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:36.999 [2024-07-11 14:02:39.154456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420 00:32:36.999 qpair failed and we were unable to recover it. 
00:32:36.999 14:02:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
[... connection-failure records continue, timestamps 14:02:39.154561 through 14:02:39.154681 ...]
00:32:37.000 14:02:39 -- common/autotest_common.sh@718 -- # xtrace_disable
[... connection-failure records continue ...]
00:32:37.000 14:02:39 -- common/autotest_common.sh@10 -- # set +x
[... connection-failure records continue through 14:02:39.156132 ...]
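The timing_exit start_nvmf_tgt and xtrace_disable lines above come from SPDK's shell test harness (common/autotest_common.sh): the harness brackets each phase with timing markers and switches bash xtrace off once the noisy startup is done. A rough, hypothetical sketch of that pattern (not the harness's actual implementation):

    #!/usr/bin/env bash
    # Hypothetical simplification of the helpers seen in the trace above;
    # the real autotest_common.sh versions are considerably more elaborate.
    timing_enter() { declare -g "_timing_start_$1=$SECONDS"; }

    timing_exit() {
        local start_var="_timing_start_$1"
        echo "phase '$1' took $((SECONDS - ${!start_var}))s"
    }

    xtrace_disable() { set +x; }    # the "set +x" visible in the log

    set -x                          # the harness runs with xtrace on
    timing_enter start_nvmf_tgt
    sleep 2                         # stand-in for launching the NVMe-oF target
    timing_exit start_nvmf_tgt
    xtrace_disable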
[... the same four-line connection-failure record repeats continuously for tqpair=0x7f4d00000b90 (two posix.c:1032:posix_sock_create connect() failures with errno = 111 per attempt, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."), timestamps 14:02:39.156249 through 14:02:39.189307 ...]
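Every connect() failure in the run above is errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target side is down, so each qpair connect is rejected immediately and the initiator keeps retrying. As an illustration only (not part of the test), a minimal bash sketch of waiting for the port to start accepting connections again, using bash's built-in /dev/tcp redirection:

    # Poll a TCP endpoint until connect() succeeds or we give up.
    wait_for_port() {
        local addr=$1 port=$2 tries=${3:-30} i
        for ((i = 0; i < tries; i++)); do
            # The subshell opens (and implicitly closes) fd 3 on the target;
            # it fails fast with ECONNREFUSED while nothing is listening.
            if (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    wait_for_port 10.0.0.2 4420 && echo "target is listening" \
                                || echo "connection still refused"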
[... connection-failure records continue ...]
00:32:37.004 14:02:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... connection-failure records continue ...]
00:32:37.004 14:02:39 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... connection-failure records continue ...]
00:32:37.004 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
[... connection-failure records continue ...]
00:32:37.004 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.004 [2024-07-11 14:02:39.190541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.004 [2024-07-11 14:02:39.190677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.004 [2024-07-11 14:02:39.190690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:37.004 qpair failed and we were unable to recover it.
[... the record now repeats with the new tqpair=0x7f4d08000b90 ...]
00:32:37.004 [2024-07-11 14:02:39.191131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.004 [2024-07-11 14:02:39.191271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.004 [2024-07-11 14:02:39.191285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d08000b90 with addr=10.0.0.2, port=4420
00:32:37.004 qpair failed and we were unable to recover it.
00:32:37.004 [the same sequence repeats 27 more times for tqpair=0x7f4d08000b90, timestamps 14:02:39.191472 through 14:02:39.198976]
00:32:37.005 [2024-07-11 14:02:39.199164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.005 [2024-07-11 14:02:39.199273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.005 [2024-07-11 14:02:39.199282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4d00000b90 with addr=10.0.0.2, port=4420
00:32:37.005 qpair failed and we were unable to recover it.
00:32:37.005 [the same sequence repeats 41 more times for tqpair=0x7f4d00000b90, timestamps 14:02:39.199537 through 14:02:39.210057]
00:32:37.006 Malloc0
00:32:37.006 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.006 14:02:39 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:37.006 [connect()/qpair-failure sequence, interleaved with the output and trace lines above, repeats 7 times for tqpair=0x7f4d00000b90, timestamps 14:02:39.210174 through 14:02:39.212109]
00:32:37.006 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.006 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.006 [2024-07-11 14:02:39.214402] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:37.006 [connect()/qpair-failure sequence repeats once more for tqpair=0x7f4d00000b90 (14:02:39.212210 through 14:02:39.212399), then 6 times for tqpair=0x1413710 (14:02:39.213978 through 14:02:39.215908)]
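The rpc_cmd nvmf_create_transport -t tcp -o step brings up the target's TCP transport, and the tcp.c *** TCP Transport Init *** notice above confirms it took effect; the initiator's retries can only start succeeding once a listener is added on top of it. A standalone sketch of the same RPC (reading -o as this SPDK vintage's TCP c2h-success toggle is an assumption):

  $ ./scripts/rpc.py nvmf_create_transport -t tcp -o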
00:32:37.007 [2024-07-11 14:02:39.216114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.007 [2024-07-11 14:02:39.216243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:37.007 [2024-07-11 14:02:39.216256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420
00:32:37.007 qpair failed and we were unable to recover it.
00:32:37.007 [the same sequence repeats 20 more times for tqpair=0x1413710, timestamps 14:02:39.216392 through 14:02:39.222259]
00:32:37.007 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.007 14:02:39 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:37.007 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.008 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.008 [connect()/qpair-failure sequence, interleaved with the trace lines above, repeats 7 times for tqpair=0x1413710, timestamps 14:02:39.222425 through 14:02:39.224734]
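rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 creates the subsystem the test exports: -a allows any host NQN to connect and -s sets the controller serial number. The standalone equivalent, as a sketch:

  $ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001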
00:32:37.008 [2024-07-11 14:02:39.224847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.008 [2024-07-11 14:02:39.225041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.008 [2024-07-11 14:02:39.225054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1413710 with addr=10.0.0.2, port=4420 00:32:37.008 qpair failed and we were unable to recover it.
00:32:37.008 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock triplets repeat from 14:02:39.225229 through 14:02:39.230958, each attempt ending with "qpair failed and we were unable to recover it." ...]
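The errno = 111 bursts above are ECONNREFUSED: the host keeps calling connect() toward 10.0.0.2:4420 while no target listener exists yet. A minimal bash sketch of that retry pattern (illustrative only, using bash's /dev/tcp redirection; the address and port come from the log, the loop itself is not the test's code):

  # Retry a TCP connect until the listener comes up; every refused
  # attempt is the shell-level analogue of connect() -> errno 111.
  until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      echo "connect() failed, errno = 111 (ECONNREFUSED); retrying"
      sleep 0.1
  done
  echo "listener is up on 10.0.0.2:4420"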
00:32:37.008 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.008 14:02:39 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:37.008 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.008 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.008 [... connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock triplets continue from 14:02:39.231111 through 14:02:39.239031, interleaved with the xtrace lines above; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:37.009 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.009 14:02:39 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:37.009 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.009 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.010 [... connect() retries continue from 14:02:39.239204 through 14:02:39.242580 ...]
00:32:37.010 [2024-07-11 14:02:39.242637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:37.010 [2024-07-11 14:02:39.245464] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:32:37.010 [2024-07-11 14:02:39.245512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1413710 (107): Transport endpoint is not connected
00:32:37.010 [2024-07-11 14:02:39.245568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.010 qpair failed and we were unable to recover it.
00:32:37.010 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.010 14:02:39 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:37.010 14:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:37.010 14:02:39 -- common/autotest_common.sh@10 -- # set +x
00:32:37.010 [2024-07-11 14:02:39.254995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.010 [2024-07-11 14:02:39.255126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.010 [2024-07-11 14:02:39.255149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.010 [2024-07-11 14:02:39.255169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.010 [2024-07-11 14:02:39.255179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.010 [2024-07-11 14:02:39.255200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.010 qpair failed and we were unable to recover it.
00:32:37.010 14:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:37.010 14:02:39 -- host/target_disconnect.sh@58 -- # wait 1789658
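The xtrace lines above show the test wiring the subsystem over rpc_cmd: adding the Malloc0 namespace, then the TCP listener, then the discovery listener. Outside the harness, the same sequence against a running nvmf_tgt would look like this sketch (assuming SPDK's standard scripts/rpc.py client on the default RPC socket):

  # Same three RPC calls as in the trace above, issued directly.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420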
00:32:37.010 [2024-07-11 14:02:39.284940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.285011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.285028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.285034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.285041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.285055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.294953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.295022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.295039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.295046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.295052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.295066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.304958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.305026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.305044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.305051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.305057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.305072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 
00:32:37.010 [2024-07-11 14:02:39.315020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.315091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.315108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.315115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.315121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.315135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.325059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.325130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.325148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.325155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.325166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.325180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.335087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.335156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.335176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.335182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.335188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.335202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 
00:32:37.010 [2024-07-11 14:02:39.345102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.345167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.345185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.345192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.345198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.345213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.355138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.355211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.355228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.355238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.355244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.355259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 00:32:37.010 [2024-07-11 14:02:39.365167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.365249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.365266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.365273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.365279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.365294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.010 qpair failed and we were unable to recover it. 
00:32:37.010 [2024-07-11 14:02:39.375193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.010 [2024-07-11 14:02:39.375265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.010 [2024-07-11 14:02:39.375284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.010 [2024-07-11 14:02:39.375291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.010 [2024-07-11 14:02:39.375297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.010 [2024-07-11 14:02:39.375312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 00:32:37.011 [2024-07-11 14:02:39.385244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.011 [2024-07-11 14:02:39.385315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.011 [2024-07-11 14:02:39.385331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.011 [2024-07-11 14:02:39.385338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.011 [2024-07-11 14:02:39.385344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.011 [2024-07-11 14:02:39.385359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 00:32:37.011 [2024-07-11 14:02:39.395248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.011 [2024-07-11 14:02:39.395314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.011 [2024-07-11 14:02:39.395330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.011 [2024-07-11 14:02:39.395337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.011 [2024-07-11 14:02:39.395343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.011 [2024-07-11 14:02:39.395357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 
00:32:37.011 [2024-07-11 14:02:39.405332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.011 [2024-07-11 14:02:39.405443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.011 [2024-07-11 14:02:39.405460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.011 [2024-07-11 14:02:39.405468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.011 [2024-07-11 14:02:39.405474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.011 [2024-07-11 14:02:39.405488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 00:32:37.011 [2024-07-11 14:02:39.415254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.011 [2024-07-11 14:02:39.415321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.011 [2024-07-11 14:02:39.415339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.011 [2024-07-11 14:02:39.415346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.011 [2024-07-11 14:02:39.415352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.011 [2024-07-11 14:02:39.415367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 00:32:37.011 [2024-07-11 14:02:39.425355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.011 [2024-07-11 14:02:39.425425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.011 [2024-07-11 14:02:39.425442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.011 [2024-07-11 14:02:39.425449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.011 [2024-07-11 14:02:39.425455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.011 [2024-07-11 14:02:39.425469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.011 qpair failed and we were unable to recover it. 
00:32:37.271 [2024-07-11 14:02:39.435326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.435434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.435451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.435458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.435464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.435479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.445491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.445570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.445587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.445597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.445604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.445619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.455507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.455571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.455591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.455597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.455603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.455617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 
00:32:37.271 [2024-07-11 14:02:39.465499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.465562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.465578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.465585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.465591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.465604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.475516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.475584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.475604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.475611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.475617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.475631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.485567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.485637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.485654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.485661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.485667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.485681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 
00:32:37.271 [2024-07-11 14:02:39.495482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.495548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.495564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.495570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.495580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.495594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.505575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.505641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.505660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.505667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.505673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.505688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.515603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.515671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.515687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.515694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.515700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.515714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 
00:32:37.271 [2024-07-11 14:02:39.525599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.525667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.525687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.525694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.525700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.525714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.535652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.535722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.535743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.535749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.535755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.535770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 00:32:37.271 [2024-07-11 14:02:39.545699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.545767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.545784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.545790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.545796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.271 [2024-07-11 14:02:39.545811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.271 qpair failed and we were unable to recover it. 
00:32:37.271 [2024-07-11 14:02:39.555690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.271 [2024-07-11 14:02:39.555770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.271 [2024-07-11 14:02:39.555787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.271 [2024-07-11 14:02:39.555793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.271 [2024-07-11 14:02:39.555800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.555813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 00:32:37.272 [2024-07-11 14:02:39.565803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.272 [2024-07-11 14:02:39.565901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.272 [2024-07-11 14:02:39.565918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.272 [2024-07-11 14:02:39.565925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.272 [2024-07-11 14:02:39.565931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.565945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 00:32:37.272 [2024-07-11 14:02:39.575784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.272 [2024-07-11 14:02:39.575856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.272 [2024-07-11 14:02:39.575873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.272 [2024-07-11 14:02:39.575880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.272 [2024-07-11 14:02:39.575886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.575899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 
00:32:37.272 [2024-07-11 14:02:39.585791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.272 [2024-07-11 14:02:39.585860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.272 [2024-07-11 14:02:39.585876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.272 [2024-07-11 14:02:39.585883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.272 [2024-07-11 14:02:39.585889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.585903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 00:32:37.272 [2024-07-11 14:02:39.595854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.272 [2024-07-11 14:02:39.595922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.272 [2024-07-11 14:02:39.595942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.272 [2024-07-11 14:02:39.595948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.272 [2024-07-11 14:02:39.595954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.595969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 00:32:37.272 [2024-07-11 14:02:39.605877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.272 [2024-07-11 14:02:39.605951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.272 [2024-07-11 14:02:39.605968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.272 [2024-07-11 14:02:39.605975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.272 [2024-07-11 14:02:39.605981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.272 [2024-07-11 14:02:39.605995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.272 qpair failed and we were unable to recover it. 
00:32:37.272 [2024-07-11 14:02:39.615938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.616006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.616023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.616030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.616036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.616050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.625862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.625932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.625952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.625959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.625965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.625979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.635952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.636021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.636038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.636045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.636051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.636065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.645994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.646062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.646081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.646088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.646094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.646108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.655988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.656070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.656087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.656094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.656100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.656114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.666021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.666086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.666105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.666112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.666118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.666136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.676059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.676123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.676139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.676146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.676152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.676169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.685996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.686067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.686083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.686090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.686097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.686110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.696110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.696181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.272 [2024-07-11 14:02:39.696198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.272 [2024-07-11 14:02:39.696205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.272 [2024-07-11 14:02:39.696212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.272 [2024-07-11 14:02:39.696227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.272 qpair failed and we were unable to recover it.
00:32:37.272 [2024-07-11 14:02:39.706119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.272 [2024-07-11 14:02:39.706231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.273 [2024-07-11 14:02:39.706248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.273 [2024-07-11 14:02:39.706254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.273 [2024-07-11 14:02:39.706261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.273 [2024-07-11 14:02:39.706275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.273 qpair failed and we were unable to recover it.
00:32:37.273 [2024-07-11 14:02:39.716177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.273 [2024-07-11 14:02:39.716256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.273 [2024-07-11 14:02:39.716278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.273 [2024-07-11 14:02:39.716286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.273 [2024-07-11 14:02:39.716292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.273 [2024-07-11 14:02:39.716307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.273 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.726206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.726272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.726289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.726296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.726303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.726317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.736137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.736205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.736222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.736229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.736235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.736250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.746274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.746344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.746360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.746366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.746373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.746387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.756288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.756373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.756390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.756397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.756403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.756422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.766313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.766382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.766398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.766405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.766412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.766426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.776342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.776411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.776428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.776435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.776441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.776455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.786365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.786431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.786447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.786454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.786460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.786475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.796403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.796471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.796490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.796497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.796503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.796517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.806407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.806476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.806496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.806502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.806509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.806523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.816449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.816518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.816534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.816541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.816547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.816561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.532 qpair failed and we were unable to recover it.
00:32:37.532 [2024-07-11 14:02:39.826504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.532 [2024-07-11 14:02:39.826575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.532 [2024-07-11 14:02:39.826592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.532 [2024-07-11 14:02:39.826598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.532 [2024-07-11 14:02:39.826605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.532 [2024-07-11 14:02:39.826619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.836486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.836554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.836569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.836576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.836582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.836597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.846564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.846677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.846694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.846701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.846707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.846724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.856584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.856643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.856658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.856665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.856671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.856685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.866654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.866760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.866777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.866785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.866792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.866806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.876707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.876801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.876818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.876825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.876831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.876845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.886675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.886740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.886755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.886762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.886768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.886782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.896661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.896732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.896751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.896758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.896764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.896778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.906738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.906822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.906839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.906846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.906853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.906867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.916770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.916847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.916863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.916870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.916877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.916891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.926790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.926881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.926898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.926905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.926910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.926924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.936765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.936833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.936852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.936859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.936864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.936883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.946857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.947103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.947121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.947128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.947134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.947148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.956851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.956920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.956937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.956944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.956950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.956965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.966908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.533 [2024-07-11 14:02:39.966974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.533 [2024-07-11 14:02:39.966989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.533 [2024-07-11 14:02:39.966996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.533 [2024-07-11 14:02:39.967002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.533 [2024-07-11 14:02:39.967016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.533 qpair failed and we were unable to recover it.
00:32:37.533 [2024-07-11 14:02:39.976953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.534 [2024-07-11 14:02:39.977022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.534 [2024-07-11 14:02:39.977039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.534 [2024-07-11 14:02:39.977045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.534 [2024-07-11 14:02:39.977051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.534 [2024-07-11 14:02:39.977065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.534 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:39.986927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:39.987005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:39.987028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:39.987034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:39.987040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:39.987056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:39.997017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:39.997084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:39.997100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:39.997107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:39.997113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:39.997127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.007025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.007294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.007313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.007362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.007369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.007395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.017075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.017167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.017185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.017194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.017201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.017216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.027123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.027226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.027243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.027250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.027260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.027277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.037146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.037219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.037236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.037243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.037249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.037263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.047180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.047251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.047268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.047275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.047281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.047295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.057190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.057256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.057273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.057279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.057286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.057300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.067216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.067284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.067301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.067307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.067314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.067328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.077305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.077379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.077396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.077403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.794 [2024-07-11 14:02:40.077408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.794 [2024-07-11 14:02:40.077423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.794 qpair failed and we were unable to recover it.
00:32:37.794 [2024-07-11 14:02:40.087211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.794 [2024-07-11 14:02:40.087280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.794 [2024-07-11 14:02:40.087298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.794 [2024-07-11 14:02:40.087305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.087311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.087325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.097305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.097418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.097434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.097441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.097447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.097462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.107328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.107387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.107403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.107409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.107416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.107430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.117363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.117436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.117452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.117459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.117469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.117483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.127383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.127458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.127475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.127481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.127487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.127501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.137374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.137441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.137457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.137464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.137470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.137484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.147458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.147527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.147544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.147551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.147557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.147571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.157489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.157563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.157579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.157586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.157592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.157606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.167443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.167515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.167531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.167538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.167544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.167558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.177582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:37.795 [2024-07-11 14:02:40.177694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:37.795 [2024-07-11 14:02:40.177711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:37.795 [2024-07-11 14:02:40.177718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:37.795 [2024-07-11 14:02:40.177724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:37.795 [2024-07-11 14:02:40.177738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:37.795 qpair failed and we were unable to recover it.
00:32:37.795 [2024-07-11 14:02:40.187568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.795 [2024-07-11 14:02:40.187642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.795 [2024-07-11 14:02:40.187659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.795 [2024-07-11 14:02:40.187666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.795 [2024-07-11 14:02:40.187672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.795 [2024-07-11 14:02:40.187686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.795 qpair failed and we were unable to recover it. 00:32:37.795 [2024-07-11 14:02:40.197561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.795 [2024-07-11 14:02:40.197628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.795 [2024-07-11 14:02:40.197647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.795 [2024-07-11 14:02:40.197654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.795 [2024-07-11 14:02:40.197660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.795 [2024-07-11 14:02:40.197674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.795 qpair failed and we were unable to recover it. 00:32:37.795 [2024-07-11 14:02:40.207625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.795 [2024-07-11 14:02:40.207704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.795 [2024-07-11 14:02:40.207721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.795 [2024-07-11 14:02:40.207727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.795 [2024-07-11 14:02:40.207737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.795 [2024-07-11 14:02:40.207751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.795 qpair failed and we were unable to recover it. 
00:32:37.795 [2024-07-11 14:02:40.217663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.795 [2024-07-11 14:02:40.217730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.795 [2024-07-11 14:02:40.217747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.795 [2024-07-11 14:02:40.217753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.795 [2024-07-11 14:02:40.217760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.795 [2024-07-11 14:02:40.217773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.795 qpair failed and we were unable to recover it. 00:32:37.795 [2024-07-11 14:02:40.227686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.795 [2024-07-11 14:02:40.227756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.795 [2024-07-11 14:02:40.227772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.795 [2024-07-11 14:02:40.227779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.795 [2024-07-11 14:02:40.227785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.795 [2024-07-11 14:02:40.227799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.795 qpair failed and we were unable to recover it. 00:32:37.796 [2024-07-11 14:02:40.237717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.796 [2024-07-11 14:02:40.237784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.796 [2024-07-11 14:02:40.237803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.796 [2024-07-11 14:02:40.237810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.796 [2024-07-11 14:02:40.237816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.796 [2024-07-11 14:02:40.237829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.796 qpair failed and we were unable to recover it. 
00:32:37.796 [2024-07-11 14:02:40.247741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:37.796 [2024-07-11 14:02:40.247809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:37.796 [2024-07-11 14:02:40.247828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:37.796 [2024-07-11 14:02:40.247835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:37.796 [2024-07-11 14:02:40.247841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:37.796 [2024-07-11 14:02:40.247856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:37.796 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.257774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.257845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.257863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.257871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.257876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.257891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.267757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.267822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.267838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.267845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.267851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.267868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 
00:32:38.056 [2024-07-11 14:02:40.277795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.277863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.277881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.277888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.277894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.277909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.287901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.287983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.288000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.288007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.288013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.288027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.297930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.298035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.298052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.298059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.298069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.298084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 
00:32:38.056 [2024-07-11 14:02:40.307931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.308005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.308021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.308028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.308034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.308049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.317901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.317983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.056 [2024-07-11 14:02:40.318000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.056 [2024-07-11 14:02:40.318007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.056 [2024-07-11 14:02:40.318013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.056 [2024-07-11 14:02:40.318027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.056 qpair failed and we were unable to recover it. 00:32:38.056 [2024-07-11 14:02:40.327966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.056 [2024-07-11 14:02:40.328036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.328053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.328061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.328068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.328081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 
00:32:38.057 [2024-07-11 14:02:40.338009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.338076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.338092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.338099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.338105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.338119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.348062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.348181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.348198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.348205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.348212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.348227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.358086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.358195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.358211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.358218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.358224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.358239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 
00:32:38.057 [2024-07-11 14:02:40.368064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.368146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.368176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.368183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.368190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.368204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.378036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.378106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.378122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.378128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.378135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.378149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.388182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.388243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.388258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.388268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.388274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.388292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 
00:32:38.057 [2024-07-11 14:02:40.398138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.398205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.398223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.398230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.398236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.398251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.408209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.408277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.408293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.408300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.408307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.408322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.418267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.418335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.418351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.418358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.418364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.418379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 
00:32:38.057 [2024-07-11 14:02:40.428267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.428331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.428347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.428353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.428359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.428374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.438301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.438418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.438435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.438442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.438448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.438462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.448363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.448432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.448449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.448456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.448462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.448476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 
00:32:38.057 [2024-07-11 14:02:40.458358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.458424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.458441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.458448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.458454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.057 [2024-07-11 14:02:40.458468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.057 qpair failed and we were unable to recover it. 00:32:38.057 [2024-07-11 14:02:40.468448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.057 [2024-07-11 14:02:40.468551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.057 [2024-07-11 14:02:40.468568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.057 [2024-07-11 14:02:40.468575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.057 [2024-07-11 14:02:40.468580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.058 [2024-07-11 14:02:40.468594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.058 qpair failed and we were unable to recover it. 00:32:38.058 [2024-07-11 14:02:40.478450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.058 [2024-07-11 14:02:40.478526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.058 [2024-07-11 14:02:40.478543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.058 [2024-07-11 14:02:40.478553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.058 [2024-07-11 14:02:40.478559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.058 [2024-07-11 14:02:40.478572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.058 qpair failed and we were unable to recover it. 
00:32:38.058 [2024-07-11 14:02:40.488436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.058 [2024-07-11 14:02:40.488508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.058 [2024-07-11 14:02:40.488525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.058 [2024-07-11 14:02:40.488531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.058 [2024-07-11 14:02:40.488537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.058 [2024-07-11 14:02:40.488551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.058 qpair failed and we were unable to recover it. 00:32:38.058 [2024-07-11 14:02:40.498416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.058 [2024-07-11 14:02:40.498484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.058 [2024-07-11 14:02:40.498500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.058 [2024-07-11 14:02:40.498507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.058 [2024-07-11 14:02:40.498513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.058 [2024-07-11 14:02:40.498527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.058 qpair failed and we were unable to recover it. 00:32:38.058 [2024-07-11 14:02:40.508463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.058 [2024-07-11 14:02:40.508528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.058 [2024-07-11 14:02:40.508544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.058 [2024-07-11 14:02:40.508550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.058 [2024-07-11 14:02:40.508556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.058 [2024-07-11 14:02:40.508574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.058 qpair failed and we were unable to recover it. 
00:32:38.318 [2024-07-11 14:02:40.518486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.318 [2024-07-11 14:02:40.518556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.318 [2024-07-11 14:02:40.518573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.318 [2024-07-11 14:02:40.518580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.318 [2024-07-11 14:02:40.518586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.318 [2024-07-11 14:02:40.518600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.318 qpair failed and we were unable to recover it. 00:32:38.318 [2024-07-11 14:02:40.528583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.318 [2024-07-11 14:02:40.528655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.318 [2024-07-11 14:02:40.528671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.318 [2024-07-11 14:02:40.528678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.318 [2024-07-11 14:02:40.528684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.318 [2024-07-11 14:02:40.528698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.318 qpair failed and we were unable to recover it. 00:32:38.318 [2024-07-11 14:02:40.538608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.318 [2024-07-11 14:02:40.538712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.318 [2024-07-11 14:02:40.538728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.318 [2024-07-11 14:02:40.538735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.318 [2024-07-11 14:02:40.538741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.318 [2024-07-11 14:02:40.538755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.318 qpair failed and we were unable to recover it. 
00:32:38.318 [2024-07-11 14:02:40.548620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.318 [2024-07-11 14:02:40.548686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.318 [2024-07-11 14:02:40.548702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.318 [2024-07-11 14:02:40.548708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.318 [2024-07-11 14:02:40.548715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.318 [2024-07-11 14:02:40.548728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.318 qpair failed and we were unable to recover it. 00:32:38.318 [2024-07-11 14:02:40.558687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.318 [2024-07-11 14:02:40.558756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.318 [2024-07-11 14:02:40.558773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.318 [2024-07-11 14:02:40.558780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.318 [2024-07-11 14:02:40.558786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.318 [2024-07-11 14:02:40.558800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.318 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.568681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.568746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.568762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.568773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.568782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.568796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 
00:32:38.319 [2024-07-11 14:02:40.578731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.578802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.578819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.578826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.578832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.578846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.588692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.588763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.588779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.588785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.588791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.588805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.598728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.598794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.598810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.598817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.598823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.598841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 
00:32:38.319 [2024-07-11 14:02:40.608739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.608812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.608829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.608836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.608842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.608856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.618804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.618873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.618890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.618896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.618902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.618917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.628893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.628963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.628980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.628986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.628993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.629007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 
00:32:38.319 [2024-07-11 14:02:40.638982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.639061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.639078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.639085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.639091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.639105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.648936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.649002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.649017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.649024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.649030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.649045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.658988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.659059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.659076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.659086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.659092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.659107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 
00:32:38.319 [2024-07-11 14:02:40.668984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.669057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.669074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.669080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.669086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.669100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.679008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.679074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.679090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.679096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.679103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.679116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.689066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.689136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.689153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.689165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.689171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.689185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 
00:32:38.319 [2024-07-11 14:02:40.699071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.699139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.699156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.319 [2024-07-11 14:02:40.699166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.319 [2024-07-11 14:02:40.699173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.319 [2024-07-11 14:02:40.699187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.319 qpair failed and we were unable to recover it. 00:32:38.319 [2024-07-11 14:02:40.709137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.319 [2024-07-11 14:02:40.709200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.319 [2024-07-11 14:02:40.709216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.709223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.709233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.709247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 00:32:38.320 [2024-07-11 14:02:40.719106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.719178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.719195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.719202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.719212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.719226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 
00:32:38.320 [2024-07-11 14:02:40.729068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.729141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.729158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.729169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.729175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.729189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 00:32:38.320 [2024-07-11 14:02:40.739184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.739254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.739272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.739280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.739285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.739299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 00:32:38.320 [2024-07-11 14:02:40.749209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.749279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.749299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.749306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.749312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.749326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 
00:32:38.320 [2024-07-11 14:02:40.759259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.759354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.759370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.759377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.759383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.759398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 00:32:38.320 [2024-07-11 14:02:40.769280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.320 [2024-07-11 14:02:40.769345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.320 [2024-07-11 14:02:40.769360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.320 [2024-07-11 14:02:40.769367] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.320 [2024-07-11 14:02:40.769373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.320 [2024-07-11 14:02:40.769390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.320 qpair failed and we were unable to recover it. 00:32:38.580 [2024-07-11 14:02:40.779266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.580 [2024-07-11 14:02:40.779341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.580 [2024-07-11 14:02:40.779358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.580 [2024-07-11 14:02:40.779364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.580 [2024-07-11 14:02:40.779371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.580 [2024-07-11 14:02:40.779385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.580 qpair failed and we were unable to recover it. 
00:32:38.580 [2024-07-11 14:02:40.789286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.580 [2024-07-11 14:02:40.789359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.580 [2024-07-11 14:02:40.789375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.580 [2024-07-11 14:02:40.789381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.580 [2024-07-11 14:02:40.789387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.580 [2024-07-11 14:02:40.789401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.580 qpair failed and we were unable to recover it. 00:32:38.580 [2024-07-11 14:02:40.799395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.580 [2024-07-11 14:02:40.799460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.580 [2024-07-11 14:02:40.799476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.580 [2024-07-11 14:02:40.799483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.580 [2024-07-11 14:02:40.799489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.580 [2024-07-11 14:02:40.799503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.580 qpair failed and we were unable to recover it. 00:32:38.580 [2024-07-11 14:02:40.809368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.580 [2024-07-11 14:02:40.809430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.580 [2024-07-11 14:02:40.809446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.580 [2024-07-11 14:02:40.809453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.580 [2024-07-11 14:02:40.809459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.580 [2024-07-11 14:02:40.809473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.580 qpair failed and we were unable to recover it. 
00:32:38.580 [2024-07-11 14:02:40.819427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.819501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.819517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.819524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.819530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.819544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.829450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.829519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.829536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.829543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.829549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.829563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.839486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.839555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.839575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.839582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.839587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.839601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 
00:32:38.581 [2024-07-11 14:02:40.849551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.849655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.849671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.849677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.849683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.849697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.859528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.859598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.859614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.859621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.859627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.859641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.869570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.869634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.869654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.869661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.869667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.869681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 
00:32:38.581 [2024-07-11 14:02:40.879568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.879639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.879656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.879663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.879669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.879689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.889630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.889742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.889759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.889765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.889772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.889787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.899681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.899749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.899765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.899772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.899778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.899792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 
00:32:38.581 [2024-07-11 14:02:40.909668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.909734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.909751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.909757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.909763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.909778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.919732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.919798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.919813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.919820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.919826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.919840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.929745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.929812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.929832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.929839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.929845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.929859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 
00:32:38.581 [2024-07-11 14:02:40.939805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.939875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.939892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.939899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.939905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.939919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.949812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.949885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.949901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.949907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.949913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.581 [2024-07-11 14:02:40.949927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.581 qpair failed and we were unable to recover it. 00:32:38.581 [2024-07-11 14:02:40.959884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.581 [2024-07-11 14:02:40.959949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.581 [2024-07-11 14:02:40.959965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.581 [2024-07-11 14:02:40.959971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.581 [2024-07-11 14:02:40.959977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:40.959991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 
00:32:38.582 [2024-07-11 14:02:40.969777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:40.969849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:40.969866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:40.969872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:40.969879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:40.969896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 00:32:38.582 [2024-07-11 14:02:40.979875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:40.979941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:40.979958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:40.979964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:40.979971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:40.979986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 00:32:38.582 [2024-07-11 14:02:40.989896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:40.989964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:40.989981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:40.989988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:40.989994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:40.990008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 
00:32:38.582 [2024-07-11 14:02:41.000007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:41.000074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:41.000089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:41.000097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:41.000107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:41.000122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 00:32:38.582 [2024-07-11 14:02:41.010027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:41.010112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:41.010130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:41.010137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:41.010144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:41.010163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 00:32:38.582 [2024-07-11 14:02:41.019996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:41.020066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:41.020086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:41.020093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:41.020099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:41.020114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 
00:32:38.582 [2024-07-11 14:02:41.030055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.582 [2024-07-11 14:02:41.030125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.582 [2024-07-11 14:02:41.030142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.582 [2024-07-11 14:02:41.030148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.582 [2024-07-11 14:02:41.030154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.582 [2024-07-11 14:02:41.030172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.582 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.040069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.040138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.040154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.040165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.040172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.040186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.050097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.050171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.050187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.050194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.050200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.050214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 
00:32:38.843 [2024-07-11 14:02:41.060112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.060184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.060201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.060208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.060214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.060231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.070184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.070289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.070305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.070312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.070318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.070332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.080191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.080256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.080271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.080278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.080284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.080298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 
00:32:38.843 [2024-07-11 14:02:41.090241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.090328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.090344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.090350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.090356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.090370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.100194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.100263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.100280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.100286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.100292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.100306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.110283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.110348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.110369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.110376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.110382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.110396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 
00:32:38.843 [2024-07-11 14:02:41.120339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.120408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.120439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.120445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.120451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.120465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.130367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.130448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.130465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.130471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.130477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.130492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.140367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.140433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.140452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.140459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.140465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.140479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 
00:32:38.843 [2024-07-11 14:02:41.150426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.150497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.150514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.150521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.150527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.150545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.160460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.160552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.160568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.160575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.843 [2024-07-11 14:02:41.160582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.843 [2024-07-11 14:02:41.160596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.843 qpair failed and we were unable to recover it. 00:32:38.843 [2024-07-11 14:02:41.170508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.843 [2024-07-11 14:02:41.170581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.843 [2024-07-11 14:02:41.170598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.843 [2024-07-11 14:02:41.170605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.170611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.170625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 
00:32:38.844 [2024-07-11 14:02:41.180502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.180571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.180588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.180594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.180600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.180615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.190475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.190560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.190577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.190584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.190589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.190603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.200570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.200638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.200658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.200665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.200671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.200685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 
00:32:38.844 [2024-07-11 14:02:41.210621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.210732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.210748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.210755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.210762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.210777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.220566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.220631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.220646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.220653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.220659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.220674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.230653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.230719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.230738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.230745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.230751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.230764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 
00:32:38.844 [2024-07-11 14:02:41.240722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.240815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.240832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.240838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.240848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.240862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.250706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.250776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.250793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.250799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.250805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.250819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.260739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.260807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.260823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.260829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.260835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.260849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 
00:32:38.844 [2024-07-11 14:02:41.270787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.270867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.270883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.270889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.270895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.270910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.280842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.280911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.280930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.280937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.280943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.280957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 00:32:38.844 [2024-07-11 14:02:41.290824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:38.844 [2024-07-11 14:02:41.290895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:38.844 [2024-07-11 14:02:41.290911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:38.844 [2024-07-11 14:02:41.290918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:38.844 [2024-07-11 14:02:41.290939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:38.844 [2024-07-11 14:02:41.290953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:38.844 qpair failed and we were unable to recover it. 
00:32:39.105 [2024-07-11 14:02:41.300874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.105 [2024-07-11 14:02:41.300941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.105 [2024-07-11 14:02:41.300957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.105 [2024-07-11 14:02:41.300964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.105 [2024-07-11 14:02:41.300974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.105 [2024-07-11 14:02:41.300987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.105 qpair failed and we were unable to recover it. 00:32:39.105 [2024-07-11 14:02:41.310904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.105 [2024-07-11 14:02:41.310970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.105 [2024-07-11 14:02:41.310987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.105 [2024-07-11 14:02:41.310994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.105 [2024-07-11 14:02:41.311000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.105 [2024-07-11 14:02:41.311014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.105 qpair failed and we were unable to recover it. 00:32:39.105 [2024-07-11 14:02:41.320903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.105 [2024-07-11 14:02:41.320975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.105 [2024-07-11 14:02:41.320991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.105 [2024-07-11 14:02:41.320998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.105 [2024-07-11 14:02:41.321004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.105 [2024-07-11 14:02:41.321017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.105 qpair failed and we were unable to recover it. 
00:32:39.105 [2024-07-11 14:02:41.330882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.105 [2024-07-11 14:02:41.330950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.105 [2024-07-11 14:02:41.330966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.105 [2024-07-11 14:02:41.330973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.105 [2024-07-11 14:02:41.330983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.105 [2024-07-11 14:02:41.330997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.105 qpair failed and we were unable to recover it.
00:32:39.105 [2024-07-11 14:02:41.340947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.105 [2024-07-11 14:02:41.341015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.105 [2024-07-11 14:02:41.341031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.105 [2024-07-11 14:02:41.341038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.105 [2024-07-11 14:02:41.341043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.105 [2024-07-11 14:02:41.341057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.105 qpair failed and we were unable to recover it.
00:32:39.105 [2024-07-11 14:02:41.350924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.105 [2024-07-11 14:02:41.351019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.105 [2024-07-11 14:02:41.351036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.105 [2024-07-11 14:02:41.351043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.105 [2024-07-11 14:02:41.351049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.105 [2024-07-11 14:02:41.351063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.105 qpair failed and we were unable to recover it.
00:32:39.105 [2024-07-11 14:02:41.361041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.105 [2024-07-11 14:02:41.361113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.105 [2024-07-11 14:02:41.361130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.105 [2024-07-11 14:02:41.361137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.105 [2024-07-11 14:02:41.361143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.105 [2024-07-11 14:02:41.361157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.105 qpair failed and we were unable to recover it.
00:32:39.105 [2024-07-11 14:02:41.371040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.105 [2024-07-11 14:02:41.371107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.105 [2024-07-11 14:02:41.371123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.105 [2024-07-11 14:02:41.371129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.105 [2024-07-11 14:02:41.371139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.371153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.381096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.381168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.381184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.381191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.381197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.381211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.391145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.391218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.391235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.391244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.391250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.391264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.401130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.401200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.401216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.401223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.401229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.401243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.411180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.411247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.411263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.411270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.411276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.411290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.421192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.421253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.421275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.421282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.421291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.421307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.431245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.431322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.431339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.431345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.431352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.431366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.441271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.441346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.441361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.441368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.441375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.441389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.451419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.451523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.451539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.451546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.451552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.451567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.461334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.461396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.461417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.461423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.461429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.461443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.471453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.471556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.471573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.471580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.471586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.471601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.481481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.481582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.481599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.481605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.481612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.481625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.491429] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.491509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.491526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.491533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.491539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.491553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.501434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.501496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.501512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.501518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.501524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.501538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.511457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.511559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.106 [2024-07-11 14:02:41.511576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.106 [2024-07-11 14:02:41.511582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.106 [2024-07-11 14:02:41.511592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.106 [2024-07-11 14:02:41.511606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.106 qpair failed and we were unable to recover it.
00:32:39.106 [2024-07-11 14:02:41.521436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.106 [2024-07-11 14:02:41.521503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.107 [2024-07-11 14:02:41.521519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.107 [2024-07-11 14:02:41.521525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.107 [2024-07-11 14:02:41.521531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.107 [2024-07-11 14:02:41.521545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.107 qpair failed and we were unable to recover it.
00:32:39.107 [2024-07-11 14:02:41.531535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.107 [2024-07-11 14:02:41.531605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.107 [2024-07-11 14:02:41.531621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.107 [2024-07-11 14:02:41.531627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.107 [2024-07-11 14:02:41.531633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.107 [2024-07-11 14:02:41.531647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.107 qpair failed and we were unable to recover it.
00:32:39.107 [2024-07-11 14:02:41.541619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.107 [2024-07-11 14:02:41.541729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.107 [2024-07-11 14:02:41.541745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.107 [2024-07-11 14:02:41.541752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.107 [2024-07-11 14:02:41.541760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.107 [2024-07-11 14:02:41.541775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.107 qpair failed and we were unable to recover it.
00:32:39.107 [2024-07-11 14:02:41.551616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.107 [2024-07-11 14:02:41.551728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.107 [2024-07-11 14:02:41.551744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.107 [2024-07-11 14:02:41.551751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.107 [2024-07-11 14:02:41.551757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.107 [2024-07-11 14:02:41.551772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.107 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.561670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.561738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.561754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.561760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.561766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.561780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.571581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.571652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.571676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.571683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.571689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.571702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.581703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.581807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.581823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.581830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.581836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.581850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.591779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.591840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.591859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.591866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.591872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.591886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.601764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.601831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.601846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.601859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.601866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.601882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.611792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.611862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.611878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.611885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.611891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.611905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.621816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.621884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.621900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.621906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.621912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.621926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.631853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.631929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.631948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.631954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.631961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.631975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.641900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.641973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.641989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.641995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.642001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.642019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.651883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.367 [2024-07-11 14:02:41.651950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.367 [2024-07-11 14:02:41.651965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.367 [2024-07-11 14:02:41.651971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.367 [2024-07-11 14:02:41.651978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.367 [2024-07-11 14:02:41.651992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.367 qpair failed and we were unable to recover it.
00:32:39.367 [2024-07-11 14:02:41.661938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.662006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.662021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.662028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.662034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.662048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.671962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.672040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.672057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.672064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.672070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.672085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.682001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.682076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.682092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.682098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.682104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.682118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.692016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.692100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.692117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.692127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.692133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.692147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.702046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.702119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.702135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.702141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.702147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.702169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.712098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.712163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.712179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.712185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.712191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.712205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.722103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.722183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.722202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.722212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.722220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.722236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.732130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.732199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.732216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.732224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.732230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.732244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.742103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.742168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.742184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.742190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.742196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.742210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.752104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.752179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.752195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.752202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.752211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.752228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.762179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.762247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.762264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.762271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.762277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.762291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.772209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.772276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.772292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.772299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.772305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.772318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.782275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.782435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.782452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.782462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.782469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.782484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.792241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.792306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.792323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.368 [2024-07-11 14:02:41.792329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.368 [2024-07-11 14:02:41.792335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.368 [2024-07-11 14:02:41.792349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.368 qpair failed and we were unable to recover it.
00:32:39.368 [2024-07-11 14:02:41.802341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.368 [2024-07-11 14:02:41.802409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.368 [2024-07-11 14:02:41.802426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.369 [2024-07-11 14:02:41.802433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.369 [2024-07-11 14:02:41.802439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.369 [2024-07-11 14:02:41.802453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.369 qpair failed and we were unable to recover it.
00:32:39.369 [2024-07-11 14:02:41.812400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.369 [2024-07-11 14:02:41.812470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.369 [2024-07-11 14:02:41.812486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.369 [2024-07-11 14:02:41.812492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.369 [2024-07-11 14:02:41.812499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.369 [2024-07-11 14:02:41.812513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.369 qpair failed and we were unable to recover it.
00:32:39.628 [2024-07-11 14:02:41.822404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.628 [2024-07-11 14:02:41.822508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.822525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.822532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.822538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.822553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.832341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.832415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.832431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.832437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.832446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.832460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.842455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.842531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.842547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.842554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.842562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.842576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.852511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.852578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.852594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.852601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.852607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.852621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.862536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.862615] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.862632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.862639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.862646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.862660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.872534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.872601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.872616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.872625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.872632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.872648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.882565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.882633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.882652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.882659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.882665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.882679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.892639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:39.629 [2024-07-11 14:02:41.892706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:39.629 [2024-07-11 14:02:41.892721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:39.629 [2024-07-11 14:02:41.892728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:39.629 [2024-07-11 14:02:41.892734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:39.629 [2024-07-11 14:02:41.892747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:39.629 qpair failed and we were unable to recover it.
00:32:39.629 [2024-07-11 14:02:41.902674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.902734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.902753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.902759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.902765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.902779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 00:32:39.629 [2024-07-11 14:02:41.912797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.912867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.912883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.912889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.912896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.912913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 00:32:39.629 [2024-07-11 14:02:41.922676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.922744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.922760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.922766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.922772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.922786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 
00:32:39.629 [2024-07-11 14:02:41.932671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.932738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.932755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.932761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.932768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.932781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 00:32:39.629 [2024-07-11 14:02:41.942755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.942841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.942857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.942864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.942871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.942885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 00:32:39.629 [2024-07-11 14:02:41.952812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.952881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.952897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.952904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.629 [2024-07-11 14:02:41.952910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.629 [2024-07-11 14:02:41.952925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.629 qpair failed and we were unable to recover it. 
00:32:39.629 [2024-07-11 14:02:41.962843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.629 [2024-07-11 14:02:41.962908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.629 [2024-07-11 14:02:41.962927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.629 [2024-07-11 14:02:41.962933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:41.962939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:41.962954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:41.972924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:41.972996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:41.973011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:41.973018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:41.973024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:41.973038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:41.982891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:41.982958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:41.982974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:41.982980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:41.982986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:41.983000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 
00:32:39.630 [2024-07-11 14:02:41.992869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:41.992963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:41.992979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:41.992985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:41.992991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:41.993006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.002938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.003013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.003029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.003038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.003044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.003059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.012982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.013063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.013083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.013090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.013098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.013113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 
00:32:39.630 [2024-07-11 14:02:42.023008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.023076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.023093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.023100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.023106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.023120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.033111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.033178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.033194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.033200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.033207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.033221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.043053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.043127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.043143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.043150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.043157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.043176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 
00:32:39.630 [2024-07-11 14:02:42.053068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.053131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.053166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.053174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.053180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.053195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.063067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.063130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.063146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.063153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.063164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.063179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 00:32:39.630 [2024-07-11 14:02:42.073202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.630 [2024-07-11 14:02:42.073268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.630 [2024-07-11 14:02:42.073284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.630 [2024-07-11 14:02:42.073291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.630 [2024-07-11 14:02:42.073297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.630 [2024-07-11 14:02:42.073311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.630 qpair failed and we were unable to recover it. 
00:32:39.892 [2024-07-11 14:02:42.083229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.083299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.083316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.083323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.083329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.083344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.093156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.093225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.093242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.093248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.093254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.093271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.103270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.103341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.103356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.103363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.103369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.103382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 
00:32:39.892 [2024-07-11 14:02:42.113217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.113285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.113301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.113308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.113314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.113331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.123239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.123307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.123322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.123329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.123335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.123349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.133324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.133395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.133411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.133418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.133424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.133441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 
00:32:39.892 [2024-07-11 14:02:42.143345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.143440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.143459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.143466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.143472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.143486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.153405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.153480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.153497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.153503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.153510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.153524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 00:32:39.892 [2024-07-11 14:02:42.163450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.892 [2024-07-11 14:02:42.163514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.892 [2024-07-11 14:02:42.163532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.892 [2024-07-11 14:02:42.163538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.892 [2024-07-11 14:02:42.163545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.892 [2024-07-11 14:02:42.163559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.892 qpair failed and we were unable to recover it. 
00:32:39.892 [2024-07-11 14:02:42.173492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.173568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.173584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.173591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.173597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.173612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.183489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.183555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.183571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.183578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.183584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.183604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.193513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.193591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.193608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.193614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.193620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.193634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 
00:32:39.893 [2024-07-11 14:02:42.203506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.203569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.203588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.203594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.203600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.203614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.213613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.213717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.213733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.213740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.213746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.213760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.223569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.223638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.223654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.223660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.223666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.223680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 
00:32:39.893 [2024-07-11 14:02:42.233624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.233696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.233716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.233723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.233729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.233744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.243648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.243718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.243733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.243740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.243746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.243760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.253683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.253791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.253807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.253814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.253820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.253835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 
00:32:39.893 [2024-07-11 14:02:42.263721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.263781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.263803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.263810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.263816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.263829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.273740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.273819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.273836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.273843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.273849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.273870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.283784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.283858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.283873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.283880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.283887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.283901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 
00:32:39.893 [2024-07-11 14:02:42.293820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.293889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.293905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.293911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.293917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.293932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.303838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.303906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.303921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.893 [2024-07-11 14:02:42.303928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.893 [2024-07-11 14:02:42.303934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.893 [2024-07-11 14:02:42.303948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.893 qpair failed and we were unable to recover it. 00:32:39.893 [2024-07-11 14:02:42.313842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.893 [2024-07-11 14:02:42.313912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.893 [2024-07-11 14:02:42.313927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.894 [2024-07-11 14:02:42.313934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.894 [2024-07-11 14:02:42.313940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.894 [2024-07-11 14:02:42.313954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.894 qpair failed and we were unable to recover it. 
00:32:39.894 [2024-07-11 14:02:42.323938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.894 [2024-07-11 14:02:42.324020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.894 [2024-07-11 14:02:42.324040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.894 [2024-07-11 14:02:42.324046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.894 [2024-07-11 14:02:42.324052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.894 [2024-07-11 14:02:42.324066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.894 qpair failed and we were unable to recover it. 00:32:39.894 [2024-07-11 14:02:42.333921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.894 [2024-07-11 14:02:42.333986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.894 [2024-07-11 14:02:42.334002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.894 [2024-07-11 14:02:42.334010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.894 [2024-07-11 14:02:42.334017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.894 [2024-07-11 14:02:42.334032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.894 qpair failed and we were unable to recover it. 00:32:39.894 [2024-07-11 14:02:42.343982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:39.894 [2024-07-11 14:02:42.344090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:39.894 [2024-07-11 14:02:42.344106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:39.894 [2024-07-11 14:02:42.344112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:39.894 [2024-07-11 14:02:42.344118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:39.894 [2024-07-11 14:02:42.344132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:39.894 qpair failed and we were unable to recover it. 
00:32:40.156 [2024-07-11 14:02:42.353990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.354061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.354077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.354085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.354092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.354106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.364020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.364099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.364116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.364122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.364128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.364147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.374040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.374107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.374124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.374130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.374136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.374151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 
00:32:40.156 [2024-07-11 14:02:42.384069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.384141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.384162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.384169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.384175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.384194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.394139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.394247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.394264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.394271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.394277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.394291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.404155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.404261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.404278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.404285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.404291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.404305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 
00:32:40.156 [2024-07-11 14:02:42.414184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.414284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.414304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.414310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.414317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.414331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.424190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.424266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.424282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.424289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.156 [2024-07-11 14:02:42.424295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.156 [2024-07-11 14:02:42.424310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.156 qpair failed and we were unable to recover it. 00:32:40.156 [2024-07-11 14:02:42.434200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.156 [2024-07-11 14:02:42.434267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.156 [2024-07-11 14:02:42.434284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.156 [2024-07-11 14:02:42.434291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.157 [2024-07-11 14:02:42.434296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.157 [2024-07-11 14:02:42.434310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.157 qpair failed and we were unable to recover it. 
00:32:40.157 [2024-07-11 14:02:42.444247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.157 [2024-07-11 14:02:42.444326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.157 [2024-07-11 14:02:42.444342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.157 [2024-07-11 14:02:42.444349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.157 [2024-07-11 14:02:42.444356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.157 [2024-07-11 14:02:42.444370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.157 qpair failed and we were unable to recover it. 00:32:40.157 [2024-07-11 14:02:42.454281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.157 [2024-07-11 14:02:42.454361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.157 [2024-07-11 14:02:42.454377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.157 [2024-07-11 14:02:42.454384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.157 [2024-07-11 14:02:42.454394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.157 [2024-07-11 14:02:42.454409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.157 qpair failed and we were unable to recover it. 00:32:40.157 [2024-07-11 14:02:42.464273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.157 [2024-07-11 14:02:42.464338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.157 [2024-07-11 14:02:42.464355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.157 [2024-07-11 14:02:42.464362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.157 [2024-07-11 14:02:42.464368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.157 [2024-07-11 14:02:42.464382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.157 qpair failed and we were unable to recover it. 
00:32:40.157 [2024-07-11 14:02:42.474338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.474399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.474416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.474422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.474428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.474446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.484370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.484436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.484452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.484458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.484464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.484478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.494396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.494465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.494482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.494489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.494495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.494509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.504393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.504460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.504480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.504486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.504492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.504507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.514454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.514517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.514532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.514539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.514545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.514559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.524492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.524562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.524579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.524585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.524591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.524605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.534553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.534616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.534632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.534639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.534645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.534659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.544546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.544606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.544622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.544628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.544638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.544651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.554620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.554732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.554748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.554755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.554761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.554775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.564587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.564688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.157 [2024-07-11 14:02:42.564704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.157 [2024-07-11 14:02:42.564710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.157 [2024-07-11 14:02:42.564716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.157 [2024-07-11 14:02:42.564730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.157 qpair failed and we were unable to recover it.
00:32:40.157 [2024-07-11 14:02:42.574630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.157 [2024-07-11 14:02:42.574699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.158 [2024-07-11 14:02:42.574715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.158 [2024-07-11 14:02:42.574721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.158 [2024-07-11 14:02:42.574727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.158 [2024-07-11 14:02:42.574741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.158 qpair failed and we were unable to recover it.
00:32:40.158 [2024-07-11 14:02:42.584648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.158 [2024-07-11 14:02:42.584709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.158 [2024-07-11 14:02:42.584724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.158 [2024-07-11 14:02:42.584731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.158 [2024-07-11 14:02:42.584737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.158 [2024-07-11 14:02:42.584751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.158 qpair failed and we were unable to recover it.
00:32:40.158 [2024-07-11 14:02:42.594680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.158 [2024-07-11 14:02:42.594750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.158 [2024-07-11 14:02:42.594767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.158 [2024-07-11 14:02:42.594773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.158 [2024-07-11 14:02:42.594779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.158 [2024-07-11 14:02:42.594793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.158 qpair failed and we were unable to recover it.
00:32:40.158 [2024-07-11 14:02:42.604780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.158 [2024-07-11 14:02:42.604847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.158 [2024-07-11 14:02:42.604865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.158 [2024-07-11 14:02:42.604872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.158 [2024-07-11 14:02:42.604878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.158 [2024-07-11 14:02:42.604892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.158 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.614667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.614741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.614758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.614766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.614772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.614786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.624778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.624844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.624860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.624867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.624873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.624887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.634861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.634966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.634983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.634990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.634999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.635013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.644856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.644944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.644961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.644968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.644974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.644988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.654911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.655022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.655039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.655046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.655052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.655065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.664936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.665002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.665021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.665028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.665034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.665048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.674934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.675007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.675024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.675031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.675037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.675051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.684964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.685032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.685048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.685054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.685060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.685074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.694981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.695051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.695067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.695074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.695080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.695093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.705020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.705100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.705116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.705123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.705129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.419 [2024-07-11 14:02:42.705143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.419 qpair failed and we were unable to recover it.
00:32:40.419 [2024-07-11 14:02:42.715062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.419 [2024-07-11 14:02:42.715131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.419 [2024-07-11 14:02:42.715148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.419 [2024-07-11 14:02:42.715154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.419 [2024-07-11 14:02:42.715163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.715178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.725076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.725143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.725163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.725173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.725179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.725193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.735109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.735179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.735195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.735202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.735208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.735223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.745144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.745209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.745225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.745232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.745238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.745253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.755196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.755300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.755317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.755324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.755330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.755344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.765198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.765266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.765282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.765290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.765296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.765310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.775154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.775229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.775244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.775251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.775257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.775271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.785261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.785332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.785348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.785355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.785361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.785375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.795275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.795343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.795359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.795365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.795371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.795385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.805327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.805395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.805411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.805418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.805424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.805437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.815345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.815455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.815471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.815481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.815487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.815502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.825428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.825537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.825553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.825560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.825566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.825580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.835418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.835482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.835499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.835506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.835513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.835527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.845450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.845543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.845559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.845566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.845573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.845587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.855475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.855544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.420 [2024-07-11 14:02:42.855560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.420 [2024-07-11 14:02:42.855566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.420 [2024-07-11 14:02:42.855572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.420 [2024-07-11 14:02:42.855586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.420 qpair failed and we were unable to recover it.
00:32:40.420 [2024-07-11 14:02:42.865505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.420 [2024-07-11 14:02:42.865570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.421 [2024-07-11 14:02:42.865589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.421 [2024-07-11 14:02:42.865595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.421 [2024-07-11 14:02:42.865602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.421 [2024-07-11 14:02:42.865616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.421 qpair failed and we were unable to recover it.
00:32:40.681 [2024-07-11 14:02:42.875550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.681 [2024-07-11 14:02:42.875617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.681 [2024-07-11 14:02:42.875636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.681 [2024-07-11 14:02:42.875643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.681 [2024-07-11 14:02:42.875649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.681 [2024-07-11 14:02:42.875663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.681 qpair failed and we were unable to recover it.
00:32:40.681 [2024-07-11 14:02:42.885572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.681 [2024-07-11 14:02:42.885638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.681 [2024-07-11 14:02:42.885654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.681 [2024-07-11 14:02:42.885661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.681 [2024-07-11 14:02:42.885672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.681 [2024-07-11 14:02:42.885686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.681 qpair failed and we were unable to recover it.
00:32:40.681 [2024-07-11 14:02:42.895607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.681 [2024-07-11 14:02:42.895693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.681 [2024-07-11 14:02:42.895710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.681 [2024-07-11 14:02:42.895717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.681 [2024-07-11 14:02:42.895723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.681 [2024-07-11 14:02:42.895737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.681 qpair failed and we were unable to recover it.
00:32:40.681 [2024-07-11 14:02:42.905644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.681 [2024-07-11 14:02:42.905704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.681 [2024-07-11 14:02:42.905720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.681 [2024-07-11 14:02:42.905730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.681 [2024-07-11 14:02:42.905735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.681 [2024-07-11 14:02:42.905749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.681 qpair failed and we were unable to recover it.
00:32:40.681 [2024-07-11 14:02:42.915668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.681 [2024-07-11 14:02:42.915732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.681 [2024-07-11 14:02:42.915748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.681 [2024-07-11 14:02:42.915755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.681 [2024-07-11 14:02:42.915761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.681 [2024-07-11 14:02:42.915779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.681 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.925697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.925808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.925825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.925832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.925838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.925853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.935695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.935808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.935824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.935831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.935838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.935852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.945736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.945835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.945852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.945859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.945865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.945878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.955761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.955831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.955848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.955855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.955861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.955875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.965794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.965860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.965876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.965883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.965892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.965906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.975818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.975890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.975907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.975914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.975920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.975933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.985805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.985871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.985888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.985895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.985901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.985915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:42.995797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:42.995866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:42.995883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:42.995893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:42.995899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:42.995913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:43.005905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:43.005971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:43.005988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:43.005995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:43.006001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:43.006015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:43.015944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:43.016019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:43.016036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:43.016043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:43.016049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:43.016063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:43.026027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:43.026114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:43.026133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:43.026140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:43.026146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:43.026166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:43.035919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:40.682 [2024-07-11 14:02:43.035988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:40.682 [2024-07-11 14:02:43.036006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:40.682 [2024-07-11 14:02:43.036012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:40.682 [2024-07-11 14:02:43.036018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:40.682 [2024-07-11 14:02:43.036033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:40.682 qpair failed and we were unable to recover it.
00:32:40.682 [2024-07-11 14:02:43.046027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.682 [2024-07-11 14:02:43.046103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.682 [2024-07-11 14:02:43.046120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.682 [2024-07-11 14:02:43.046127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.682 [2024-07-11 14:02:43.046133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.682 [2024-07-11 14:02:43.046148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.682 qpair failed and we were unable to recover it. 00:32:40.682 [2024-07-11 14:02:43.056052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.682 [2024-07-11 14:02:43.056120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.682 [2024-07-11 14:02:43.056139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.682 [2024-07-11 14:02:43.056146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.682 [2024-07-11 14:02:43.056152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.682 [2024-07-11 14:02:43.056171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.682 qpair failed and we were unable to recover it. 00:32:40.682 [2024-07-11 14:02:43.066131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.682 [2024-07-11 14:02:43.066207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.682 [2024-07-11 14:02:43.066225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.682 [2024-07-11 14:02:43.066231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.066238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.066253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 
00:32:40.683 [2024-07-11 14:02:43.076125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.076195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.076212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.076219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.076225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.076239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 00:32:40.683 [2024-07-11 14:02:43.086149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.086255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.086272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.086287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.086293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.086308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 00:32:40.683 [2024-07-11 14:02:43.096186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.096256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.096273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.096280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.096286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.096300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 
00:32:40.683 [2024-07-11 14:02:43.106199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.106264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.106279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.106286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.106293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.106311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 00:32:40.683 [2024-07-11 14:02:43.116246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.116346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.116363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.116370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.116376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.116390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 00:32:40.683 [2024-07-11 14:02:43.126229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.683 [2024-07-11 14:02:43.126297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.683 [2024-07-11 14:02:43.126313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.683 [2024-07-11 14:02:43.126320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.683 [2024-07-11 14:02:43.126326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.683 [2024-07-11 14:02:43.126344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.683 qpair failed and we were unable to recover it. 
00:32:40.944 [2024-07-11 14:02:43.136256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.944 [2024-07-11 14:02:43.136331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.944 [2024-07-11 14:02:43.136348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.944 [2024-07-11 14:02:43.136355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.944 [2024-07-11 14:02:43.136361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.944 [2024-07-11 14:02:43.136375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.944 qpair failed and we were unable to recover it. 00:32:40.944 [2024-07-11 14:02:43.146257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.944 [2024-07-11 14:02:43.146323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.944 [2024-07-11 14:02:43.146341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.944 [2024-07-11 14:02:43.146348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.944 [2024-07-11 14:02:43.146354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.944 [2024-07-11 14:02:43.146369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.944 qpair failed and we were unable to recover it. 00:32:40.944 [2024-07-11 14:02:43.156274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.944 [2024-07-11 14:02:43.156340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.944 [2024-07-11 14:02:43.156356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.944 [2024-07-11 14:02:43.156368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.944 [2024-07-11 14:02:43.156374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.944 [2024-07-11 14:02:43.156389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.944 qpair failed and we were unable to recover it. 
00:32:40.945 [2024-07-11 14:02:43.166357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.166426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.166443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.166450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.166460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.166475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.176375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.176449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.176469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.176476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.176482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.176496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.186418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.186489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.186505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.186511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.186517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.186531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 
00:32:40.945 [2024-07-11 14:02:43.196458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.196519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.196535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.196542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.196548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.196561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.206484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.206551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.206567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.206574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.206580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.206594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.216514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.216582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.216602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.216608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.216614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.216628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 
00:32:40.945 [2024-07-11 14:02:43.226504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.226570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.226586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.226593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.226602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.226616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.236588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.236657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.236674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.236681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.236687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.236700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.246532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.246600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.246615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.246623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.246633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.246647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 
00:32:40.945 [2024-07-11 14:02:43.256562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.256633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.256650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.256656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.256662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.256676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.266632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.266695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.266713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.266720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.266726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.266740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.276636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.276702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.276720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.276726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.276732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.276747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 
00:32:40.945 [2024-07-11 14:02:43.286732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.286801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.286820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.286827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.286833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.286847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.296728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.296792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.296807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.945 [2024-07-11 14:02:43.296814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.945 [2024-07-11 14:02:43.296820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.945 [2024-07-11 14:02:43.296834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.945 qpair failed and we were unable to recover it. 00:32:40.945 [2024-07-11 14:02:43.306773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.945 [2024-07-11 14:02:43.306839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.945 [2024-07-11 14:02:43.306854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.306861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.306867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.306886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 
00:32:40.946 [2024-07-11 14:02:43.316817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.316879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.316895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.316901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.316907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.316921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.326815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.326894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.326911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.326918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.326924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.326938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.336889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.336954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.336971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.336978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.336985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.336999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 
00:32:40.946 [2024-07-11 14:02:43.346840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.346903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.346920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.346926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.346932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.346946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.356902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.356971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.356991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.356998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.357005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.357019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.366990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.367105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.367122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.367129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.367135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.367150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 
00:32:40.946 [2024-07-11 14:02:43.376926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.377018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.377035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.377042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.377048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.377063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.387032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.387101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.387118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.387125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.387131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.387145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 00:32:40.946 [2024-07-11 14:02:43.396984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:40.946 [2024-07-11 14:02:43.397052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:40.946 [2024-07-11 14:02:43.397071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:40.946 [2024-07-11 14:02:43.397078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:40.946 [2024-07-11 14:02:43.397084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:40.946 [2024-07-11 14:02:43.397102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:40.946 qpair failed and we were unable to recover it. 
00:32:41.207 [2024-07-11 14:02:43.407037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.407102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.407118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.407125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.407131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.407145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.417022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.417091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.417108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.417115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.417121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.417135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.427096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.427169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.427185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.427192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.427198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.427211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 
00:32:41.207 [2024-07-11 14:02:43.437165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.437237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.437253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.437259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.437266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.437280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.447176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.447243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.447266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.447273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.447278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.447293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.457192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.457285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.457302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.457309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.457315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.457329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 
00:32:41.207 [2024-07-11 14:02:43.467217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.467287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.467304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.467310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.467316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.467330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.477230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.477339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.477356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.477363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.477369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.477383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.487292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.487357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.487372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.487379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.487385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.487406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 
00:32:41.207 [2024-07-11 14:02:43.497309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.497379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.497396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.497402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.497409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.207 [2024-07-11 14:02:43.497423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.207 qpair failed and we were unable to recover it. 00:32:41.207 [2024-07-11 14:02:43.507339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.207 [2024-07-11 14:02:43.507418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.207 [2024-07-11 14:02:43.507434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.207 [2024-07-11 14:02:43.507441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.207 [2024-07-11 14:02:43.507447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.208 [2024-07-11 14:02:43.507461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.208 qpair failed and we were unable to recover it. 00:32:41.208 [2024-07-11 14:02:43.517318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.208 [2024-07-11 14:02:43.517383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.208 [2024-07-11 14:02:43.517398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.208 [2024-07-11 14:02:43.517405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.208 [2024-07-11 14:02:43.517410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.208 [2024-07-11 14:02:43.517424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.208 qpair failed and we were unable to recover it. 
00:32:41.208 [2024-07-11 14:02:43.527470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.208 [2024-07-11 14:02:43.527537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.208 [2024-07-11 14:02:43.527552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.208 [2024-07-11 14:02:43.527559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.208 [2024-07-11 14:02:43.527565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.208 [2024-07-11 14:02:43.527580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.208 qpair failed and we were unable to recover it. 00:32:41.208 [2024-07-11 14:02:43.537376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.208 [2024-07-11 14:02:43.537443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.208 [2024-07-11 14:02:43.537463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.208 [2024-07-11 14:02:43.537469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.208 [2024-07-11 14:02:43.537475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.208 [2024-07-11 14:02:43.537489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.208 qpair failed and we were unable to recover it. 00:32:41.208 [2024-07-11 14:02:43.547438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.208 [2024-07-11 14:02:43.547507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.208 [2024-07-11 14:02:43.547523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.208 [2024-07-11 14:02:43.547530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.208 [2024-07-11 14:02:43.547536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.208 [2024-07-11 14:02:43.547554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.208 qpair failed and we were unable to recover it. 
00:32:41.208 [2024-07-11 14:02:43.557507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.557588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.557605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.557612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.557618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.557633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.567556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.567626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.567643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.567650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.567656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.567670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.577572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.577678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.577694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.577701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.577710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.577726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.587517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.587578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.587594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.587600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.587606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.587620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.597628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.597739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.597755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.597762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.597768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.597782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.607640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.607708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.607727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.607734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.607740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.607754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.617668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.617738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.617755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.617762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.617768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.617782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.627708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.627805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.627824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.627831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.627837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.627851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.637753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.637811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.637827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.637834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.637841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.637855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.647768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.647834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.208 [2024-07-11 14:02:43.647850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.208 [2024-07-11 14:02:43.647857] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.208 [2024-07-11 14:02:43.647863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.208 [2024-07-11 14:02:43.647880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.208 qpair failed and we were unable to recover it.
00:32:41.208 [2024-07-11 14:02:43.657757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.208 [2024-07-11 14:02:43.657825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.209 [2024-07-11 14:02:43.657841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.209 [2024-07-11 14:02:43.657848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.209 [2024-07-11 14:02:43.657858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.209 [2024-07-11 14:02:43.657871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.209 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.667758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.667822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.667838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.667846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.667859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.667873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.677860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.677925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.677944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.677951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.677957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.677971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.687890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.687956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.687972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.687978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.687984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.687998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.697902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.697972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.697989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.697996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.698001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.698016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.707933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.707995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.708010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.708017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.708023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.708037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.718010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.718098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.718114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.718121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.718127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.718141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.728003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.728080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.728096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.728103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.728109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.728123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.737984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.738054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.738071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.738078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.738084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.738097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.747986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.748078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.748094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.748100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.748106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.748121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.758118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.758221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.758238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.758245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.468 [2024-07-11 14:02:43.758254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.468 [2024-07-11 14:02:43.758268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.468 qpair failed and we were unable to recover it.
00:32:41.468 [2024-07-11 14:02:43.768050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.468 [2024-07-11 14:02:43.768116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.468 [2024-07-11 14:02:43.768135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.468 [2024-07-11 14:02:43.768142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.768148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.768171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.778154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.778258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.778274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.778281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.778287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.778302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.788177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.788248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.788263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.788270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.788276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.788290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.798195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.798263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.798281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.798287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.798294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.798308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.808228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.808325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.808342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.808348] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.808354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.808369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.818254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.818334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.818351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.818358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.818364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.818378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.828284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.828390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.828406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.828413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.828419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.828433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.838329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.838397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.838413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.838420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.838426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.838440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.848364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.848430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.848446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.848453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.848465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.848479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.858333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.858399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.858415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.858422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.858430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.858444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.868410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.868489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.868506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.868512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.868518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.868532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.878475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.878544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.878561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.878568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.878574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.878588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.888474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.888540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.888556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.888562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.888569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.888583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.898495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.898567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.898584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.898591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.898598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.898612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.908530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.469 [2024-07-11 14:02:43.908600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.469 [2024-07-11 14:02:43.908617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.469 [2024-07-11 14:02:43.908624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.469 [2024-07-11 14:02:43.908630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.469 [2024-07-11 14:02:43.908644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.469 qpair failed and we were unable to recover it.
00:32:41.469 [2024-07-11 14:02:43.918561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.470 [2024-07-11 14:02:43.918629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.470 [2024-07-11 14:02:43.918645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.470 [2024-07-11 14:02:43.918652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.470 [2024-07-11 14:02:43.918658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.470 [2024-07-11 14:02:43.918672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.470 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.928539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.928606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.928621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.928629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.928635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.928649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.938611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.938688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.938705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.938716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.938722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.938737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.948644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.948705] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.948721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.948728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.948734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.948748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.958666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.958730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.958746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.958753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.958759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.958777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.968701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.968816] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.968833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.968840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.968846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.968860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.978735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.978801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.978816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.978823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.978829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.978843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.988745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.988812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.988829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.988835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.988841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.988855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:43.998775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:43.998843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:43.998860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:43.998866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:43.998872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:43.998886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:44.008825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.730 [2024-07-11 14:02:44.008904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.730 [2024-07-11 14:02:44.008922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.730 [2024-07-11 14:02:44.008929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.730 [2024-07-11 14:02:44.008935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.730 [2024-07-11 14:02:44.008950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.730 qpair failed and we were unable to recover it.
00:32:41.730 [2024-07-11 14:02:44.018840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.018903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.018919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.018926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.018932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.018946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.028914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.028981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.029000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.029011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.029018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.029031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.038918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.038990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.039006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.039013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.039019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.039034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.048946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.049010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.049027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.049034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.049044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.049058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.058966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.059044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.059061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.059068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.059074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.059088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.068991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.069055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.069071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.069077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.069083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.069097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.079076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.079143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.079164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.079172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.079178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.079193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.089060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.089141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.089161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.089168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.089174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.089189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.099083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.099171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.099188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.099195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.099201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.099215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.109150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.109250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.109266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.109273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.109279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.109293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.119135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.119217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.119234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.119244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.119250] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.119264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.129207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.129318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.129335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.129342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.129348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.731 [2024-07-11 14:02:44.129363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.731 qpair failed and we were unable to recover it.
00:32:41.731 [2024-07-11 14:02:44.139220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.731 [2024-07-11 14:02:44.139287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.731 [2024-07-11 14:02:44.139306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.731 [2024-07-11 14:02:44.139313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.731 [2024-07-11 14:02:44.139319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.732 [2024-07-11 14:02:44.139333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.732 qpair failed and we were unable to recover it.
00:32:41.732 [2024-07-11 14:02:44.149134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.732 [2024-07-11 14:02:44.149205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.732 [2024-07-11 14:02:44.149223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.732 [2024-07-11 14:02:44.149229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.732 [2024-07-11 14:02:44.149236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.732 [2024-07-11 14:02:44.149249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.732 qpair failed and we were unable to recover it.
00:32:41.732 [2024-07-11 14:02:44.159252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.732 [2024-07-11 14:02:44.159323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.732 [2024-07-11 14:02:44.159340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.732 [2024-07-11 14:02:44.159347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.732 [2024-07-11 14:02:44.159353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.732 [2024-07-11 14:02:44.159367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.732 qpair failed and we were unable to recover it.
00:32:41.732 [2024-07-11 14:02:44.169316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.732 [2024-07-11 14:02:44.169385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.732 [2024-07-11 14:02:44.169405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.732 [2024-07-11 14:02:44.169411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.732 [2024-07-11 14:02:44.169417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.732 [2024-07-11 14:02:44.169431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.732 qpair failed and we were unable to recover it.
00:32:41.732 [2024-07-11 14:02:44.179275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.732 [2024-07-11 14:02:44.179339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.732 [2024-07-11 14:02:44.179354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.732 [2024-07-11 14:02:44.179361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.732 [2024-07-11 14:02:44.179367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.732 [2024-07-11 14:02:44.179381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.732 qpair failed and we were unable to recover it.
00:32:41.993 [2024-07-11 14:02:44.189278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.993 [2024-07-11 14:02:44.189343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.993 [2024-07-11 14:02:44.189358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.993 [2024-07-11 14:02:44.189366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.993 [2024-07-11 14:02:44.189371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.993 [2024-07-11 14:02:44.189386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.993 qpair failed and we were unable to recover it.
00:32:41.993 [2024-07-11 14:02:44.199364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.993 [2024-07-11 14:02:44.199477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.994 [2024-07-11 14:02:44.199494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.994 [2024-07-11 14:02:44.199500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.994 [2024-07-11 14:02:44.199507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.994 [2024-07-11 14:02:44.199520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.994 qpair failed and we were unable to recover it.
00:32:41.994 [2024-07-11 14:02:44.209423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.994 [2024-07-11 14:02:44.209531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.994 [2024-07-11 14:02:44.209548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.994 [2024-07-11 14:02:44.209558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.994 [2024-07-11 14:02:44.209564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.994 [2024-07-11 14:02:44.209578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.994 qpair failed and we were unable to recover it.
00:32:41.994 [2024-07-11 14:02:44.219415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.994 [2024-07-11 14:02:44.219489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.994 [2024-07-11 14:02:44.219505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.994 [2024-07-11 14:02:44.219512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.994 [2024-07-11 14:02:44.219518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.994 [2024-07-11 14:02:44.219532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.994 qpair failed and we were unable to recover it.
00:32:41.994 [2024-07-11 14:02:44.229430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.994 [2024-07-11 14:02:44.229492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.994 [2024-07-11 14:02:44.229508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.994 [2024-07-11 14:02:44.229515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.994 [2024-07-11 14:02:44.229521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.994 [2024-07-11 14:02:44.229534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.994 qpair failed and we were unable to recover it.
00:32:41.994 [2024-07-11 14:02:44.239526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:41.994 [2024-07-11 14:02:44.239636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:41.994 [2024-07-11 14:02:44.239652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:41.994 [2024-07-11 14:02:44.239659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:41.994 [2024-07-11 14:02:44.239665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710
00:32:41.994 [2024-07-11 14:02:44.239679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:32:41.994 qpair failed and we were unable to recover it.
00:32:41.994 [2024-07-11 14:02:44.249530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.249598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.249615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.249622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.249628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.249643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.259543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.259617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.259635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.259642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.259648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.259663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.269553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.269618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.269634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.269646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.269654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.269669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 
00:32:41.994 [2024-07-11 14:02:44.279583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.279649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.279669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.279675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.279682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.279696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.289653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.289718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.289734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.289741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.289747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.289761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.299664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.299733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.299749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.299775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.299782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.299795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 
00:32:41.994 [2024-07-11 14:02:44.309698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.309781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.309798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.309805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.309811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.309825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.319744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.319847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.319864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.319870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.319877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.319891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 00:32:41.994 [2024-07-11 14:02:44.329774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.329869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.329886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.329893] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.329899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.994 [2024-07-11 14:02:44.329912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.994 qpair failed and we were unable to recover it. 
00:32:41.994 [2024-07-11 14:02:44.339772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.994 [2024-07-11 14:02:44.339844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.994 [2024-07-11 14:02:44.339860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.994 [2024-07-11 14:02:44.339867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.994 [2024-07-11 14:02:44.339874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.339888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.349810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.349876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.349892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.349898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.349904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.349918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.359814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.359879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.359896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.359902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.359909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.359923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 
00:32:41.995 [2024-07-11 14:02:44.369889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.369963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.369981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.369988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.369994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.370009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.379891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.379962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.379978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.379985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.379991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.380005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.389950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.390018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.390039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.390046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.390052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.390066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 
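Because the same seven-record block recurs roughly every 10 ms, a run like this is easier to summarize than to read linearly. A minimal triage sketch over a saved copy of this console output (the build.log filename is illustrative):

    # Count recovery failures per transport qpair pointer and per qpair id.
    grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log | sort | uniq -c
    grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c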
00:32:41.995 [2024-07-11 14:02:44.399942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.400032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.400049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.400056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.400062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.400076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.409984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.410054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.410070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.410077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.410083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.410097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.420084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.420151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.420171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.420178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.420183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.420197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 
00:32:41.995 [2024-07-11 14:02:44.430043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.430111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.430127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.430134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.430140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.430154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:41.995 [2024-07-11 14:02:44.440077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:41.995 [2024-07-11 14:02:44.440144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:41.995 [2024-07-11 14:02:44.440163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:41.995 [2024-07-11 14:02:44.440170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:41.995 [2024-07-11 14:02:44.440176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:41.995 [2024-07-11 14:02:44.440191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:41.995 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.450118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.450227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.450244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.450252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.450259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.450273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 
00:32:42.256 [2024-07-11 14:02:44.460154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.460224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.460240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.460247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.460253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.460267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.470192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.470258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.470274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.470281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.470287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.470301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.480197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.480259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.480279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.480286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.480292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.480306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 
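The "Unknown controller ID 0x1" half of each record is logged by the target side (ctrlr.c): the I/O queue's CONNECT names a controller the subsystem no longer tracks. Assuming the target's RPC socket is still reachable during the run, its view of that state could be inspected with SPDK's rpc.py, sketched below:

    # List the controllers and qpairs the subsystem currently knows about;
    # after the disconnect is injected, controller ID 0x1 should be absent.
    rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1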
00:32:42.256 [2024-07-11 14:02:44.490263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.490329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.490345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.490352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.490358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.490372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.500209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.500284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.500300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.500307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.500313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.500328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.510279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.510357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.510374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.510380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.510386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.510401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 
00:32:42.256 [2024-07-11 14:02:44.520327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.256 [2024-07-11 14:02:44.520435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.256 [2024-07-11 14:02:44.520451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.256 [2024-07-11 14:02:44.520458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.256 [2024-07-11 14:02:44.520464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.256 [2024-07-11 14:02:44.520481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.256 qpair failed and we were unable to recover it. 00:32:42.256 [2024-07-11 14:02:44.530366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.530467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.530483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.530490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.530497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.530511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.540352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.540434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.540450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.540457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.540463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.540478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 
00:32:42.257 [2024-07-11 14:02:44.550380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.550447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.550463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.550470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.550476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.550490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.560339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.560408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.560423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.560430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.560436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.560451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.570460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.570528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.570551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.570558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.570564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.570579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 
00:32:42.257 [2024-07-11 14:02:44.580451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.580523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.580539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.580546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.580555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.580569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.590502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.590565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.590581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.590588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.590594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1413710 00:32:42.257 [2024-07-11 14:02:44.590608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.600575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.600663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.600691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.600705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.600717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cf8000b90 00:32:42.257 [2024-07-11 14:02:44.600742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:42.257 qpair failed and we were unable to recover it. 
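From this point the failing transport qpair changes from 0x1413710 (qpair id 3) to per-thread pointers such as 0x7f4cf8000b90 (qpair id 4) and, below, 0x7f4d08000b90 (qpair id 1), which is consistent with each worker core driving its own I/O queue and hitting the same rejection independently. With the kernel initiator the analogous knob would be the I/O queue count; values here are illustrative:

    # One I/O queue per worker core, mirroring the four lcores this test uses.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --nr-io-queues=4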
00:32:42.257 [2024-07-11 14:02:44.610623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.610706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.610723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.610730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.610738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4cf8000b90 00:32:42.257 [2024-07-11 14:02:44.610757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.620648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.620732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.620760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.620772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.620783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d08000b90 00:32:42.257 [2024-07-11 14:02:44.620808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.630629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.630704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.630722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.630730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.630737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d08000b90 00:32:42.257 [2024-07-11 14:02:44.630753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.630893] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:32:42.257 A controller has encountered a failure and is being reset. 
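The keep-alive failure is the event that escalates from per-qpair errors to a full controller reset: once "Submitting Keep Alive failed" is hit, the controller is marked failed and recovery begins. As a hedged sketch, the matching knob on the kernel initiator side is the keep-alive timeout; the 5-second value is illustrative:

    # A shorter keep-alive timeout surfaces a dead controller sooner, at the
    # cost of more false positives on a heavily loaded target.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --keep-alive-tmo=5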
00:32:42.257 [2024-07-11 14:02:44.640715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.640823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.640845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.640853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.640859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d00000b90 00:32:42.257 [2024-07-11 14:02:44.640876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 [2024-07-11 14:02:44.650691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:42.257 [2024-07-11 14:02:44.650762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:42.257 [2024-07-11 14:02:44.650779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:42.257 [2024-07-11 14:02:44.650786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:42.257 [2024-07-11 14:02:44.650792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4d00000b90 00:32:42.257 [2024-07-11 14:02:44.650808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:32:42.257 qpair failed and we were unable to recover it. 00:32:42.257 Controller properly reset. 00:32:42.517 Initializing NVMe Controllers 00:32:42.517 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:32:42.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:32:42.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:32:42.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:32:42.517 Initialization complete. Launching workers. 
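"Controller properly reset." followed by re-attachment and worker relaunch appears to be the pass condition for this test case: the host recovered the controller after the injected disconnects. A minimal sketch of the same drop-and-retry pattern with the kernel initiator (the retry interval is illustrative):

    # Drop the failed session, then retry the connect until the target,
    # done resetting, accepts it again.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    until nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1; do
      sleep 1   # target may still be mid-reset; back off briefly
    done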
00:32:42.517 Starting thread on core 1 00:32:42.517 Starting thread on core 2 00:32:42.517 Starting thread on core 3 00:32:42.517 Starting thread on core 0 00:32:42.517 14:02:44 -- host/target_disconnect.sh@59 -- # sync 00:32:42.517 00:32:42.517 real 0m11.407s 00:32:42.517 user 0m21.109s 00:32:42.517 sys 0m4.248s 00:32:42.517 14:02:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:42.517 14:02:44 -- common/autotest_common.sh@10 -- # set +x 00:32:42.517 ************************************ 00:32:42.517 END TEST nvmf_target_disconnect_tc2 00:32:42.517 ************************************ 00:32:42.517 14:02:44 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:32:42.517 14:02:44 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:42.517 14:02:44 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:32:42.517 14:02:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:42.517 14:02:44 -- nvmf/common.sh@116 -- # sync 00:32:42.517 14:02:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:42.517 14:02:44 -- nvmf/common.sh@119 -- # set +e 00:32:42.517 14:02:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:42.517 14:02:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:42.517 rmmod nvme_tcp 00:32:42.517 rmmod nvme_fabrics 00:32:42.517 rmmod nvme_keyring 00:32:42.517 14:02:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:42.517 14:02:44 -- nvmf/common.sh@123 -- # set -e 00:32:42.517 14:02:44 -- nvmf/common.sh@124 -- # return 0 00:32:42.517 14:02:44 -- nvmf/common.sh@477 -- # '[' -n 1790363 ']' 00:32:42.517 14:02:44 -- nvmf/common.sh@478 -- # killprocess 1790363 00:32:42.517 14:02:44 -- common/autotest_common.sh@926 -- # '[' -z 1790363 ']' 00:32:42.517 14:02:44 -- common/autotest_common.sh@930 -- # kill -0 1790363 00:32:42.517 14:02:44 -- common/autotest_common.sh@931 -- # uname 00:32:42.517 14:02:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:42.517 14:02:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1790363 00:32:42.517 14:02:44 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:32:42.517 14:02:44 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:32:42.517 14:02:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1790363' 00:32:42.517 killing process with pid 1790363 00:32:42.517 14:02:44 -- common/autotest_common.sh@945 -- # kill 1790363 00:32:42.517 14:02:44 -- common/autotest_common.sh@950 -- # wait 1790363 00:32:42.777 14:02:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:42.777 14:02:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:42.777 14:02:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:42.777 14:02:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.777 14:02:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:42.777 14:02:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.777 14:02:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.777 14:02:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.743 14:02:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:44.743 00:32:44.743 real 0m19.348s 00:32:44.743 user 0m48.785s 00:32:44.743 sys 0m8.576s 00:32:44.743 14:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.743 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:44.743 ************************************ 00:32:44.743 END TEST nvmf_target_disconnect 00:32:44.743 
************************************ 00:32:44.743 14:02:47 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:32:44.743 14:02:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:44.743 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:44.743 14:02:47 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:32:44.743 00:32:44.743 real 24m22.275s 00:32:44.743 user 66m14.357s 00:32:44.743 sys 6m34.017s 00:32:44.743 14:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.743 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:44.743 ************************************ 00:32:44.743 END TEST nvmf_tcp 00:32:44.743 ************************************ 00:32:45.002 14:02:47 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:32:45.002 14:02:47 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:45.002 14:02:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:45.002 14:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:45.002 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:45.002 ************************************ 00:32:45.002 START TEST spdkcli_nvmf_tcp 00:32:45.002 ************************************ 00:32:45.002 14:02:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:45.002 * Looking for test storage... 00:32:45.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:45.002 14:02:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:45.002 14:02:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.002 14:02:47 -- nvmf/common.sh@7 -- # uname -s 00:32:45.002 14:02:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.002 14:02:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.002 14:02:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.002 14:02:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.002 14:02:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.002 14:02:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.002 14:02:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.002 14:02:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.002 14:02:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.002 14:02:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.002 14:02:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:45.002 14:02:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:45.002 14:02:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.002 14:02:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.002 14:02:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.002 14:02:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.002 14:02:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:32:45.002 14:02:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.002 14:02:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.002 14:02:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.002 14:02:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.002 14:02:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.002 14:02:47 -- paths/export.sh@5 -- # export PATH 00:32:45.002 14:02:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.002 14:02:47 -- nvmf/common.sh@46 -- # : 0 00:32:45.002 14:02:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:45.002 14:02:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:45.002 14:02:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:45.002 14:02:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.002 14:02:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.002 14:02:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:45.002 14:02:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:45.002 14:02:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:45.002 14:02:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:45.002 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:45.002 14:02:47 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:45.002 14:02:47 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1791908 00:32:45.002 14:02:47 -- spdkcli/common.sh@34 -- # waitforlisten 1791908 00:32:45.002 14:02:47 -- common/autotest_common.sh@819 -- # '[' -z 1791908 ']' 00:32:45.002 14:02:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.002 14:02:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:45.002 
14:02:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.002 14:02:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:45.002 14:02:47 -- common/autotest_common.sh@10 -- # set +x 00:32:45.002 14:02:47 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:45.002 [2024-07-11 14:02:47.386937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:45.003 [2024-07-11 14:02:47.386988] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791908 ] 00:32:45.003 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.003 [2024-07-11 14:02:47.442507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:45.262 [2024-07-11 14:02:47.482046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:45.262 [2024-07-11 14:02:47.482226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.262 [2024-07-11 14:02:47.482230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.830 14:02:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:45.830 14:02:48 -- common/autotest_common.sh@852 -- # return 0 00:32:45.830 14:02:48 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:45.830 14:02:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:45.830 14:02:48 -- common/autotest_common.sh@10 -- # set +x 00:32:45.830 14:02:48 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:45.830 14:02:48 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:45.830 14:02:48 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:45.830 14:02:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:45.830 14:02:48 -- common/autotest_common.sh@10 -- # set +x 00:32:45.830 14:02:48 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:45.830 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:45.830 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:45.830 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:45.830 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:45.830 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:45.830 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:45.830 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.830 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.830 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.830 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:45.830 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:45.830 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:45.830 ' 00:32:46.398 [2024-07-11 14:02:48.546562] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:48.303 [2024-07-11 14:02:50.595933] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.705 [2024-07-11 14:02:51.771954] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:51.613 [2024-07-11 14:02:53.934916] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:53.517 [2024-07-11 14:02:55.793007] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:54.896 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:54.896 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:54.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.896 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:54.896 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:54.896 14:02:57 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:54.896 14:02:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:54.896 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:32:55.155 14:02:57 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:55.155 14:02:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:55.155 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:32:55.155 14:02:57 -- spdkcli/nvmf.sh@69 -- # check_match 00:32:55.155 14:02:57 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:55.415 14:02:57 -- 
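Each "Executing command" line above is spdkcli_job.py replaying one entry of the batch: the command, the string expected in the result, and a boolean the job script consumes when validating the step. spdkcli itself just drives the target's JSON-RPC interface, so the same nvmf configuration can be built with direct rpc.py calls; a condensed sketch against the default socket, with bdev geometry, serial and NQN copied from the batch above (max_io_qpairs_per_ctrlr is also settable on the transport, but its flag spelling varies across releases, so it is omitted here):

scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
scripts/rpc.py nvmf_create_transport -t tcp -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260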
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:55.415 14:02:57 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:55.415 14:02:57 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:55.415 14:02:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:55.415 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:32:55.415 14:02:57 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:55.415 14:02:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:55.415 14:02:57 -- common/autotest_common.sh@10 -- # set +x 00:32:55.415 14:02:57 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:55.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:55.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:55.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:55.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:55.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:55.415 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:55.415 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:55.415 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:55.415 ' 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:00.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:00.689 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:00.689 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:00.689 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:00.689 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:00.689 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:00.689 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:00.689 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:00.689 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:00.689 14:03:02 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:00.689 14:03:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:00.689 14:03:02 -- common/autotest_common.sh@10 -- # set +x 00:33:00.689 14:03:02 -- spdkcli/nvmf.sh@90 -- # killprocess 1791908 00:33:00.689 14:03:02 -- common/autotest_common.sh@926 -- # '[' -z 1791908 ']' 00:33:00.689 14:03:02 -- common/autotest_common.sh@930 -- # kill -0 1791908 00:33:00.689 14:03:02 -- common/autotest_common.sh@931 -- # uname 00:33:00.689 14:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:00.689 14:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1791908 00:33:00.689 14:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:00.689 14:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:00.689 14:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1791908' 00:33:00.689 killing process with pid 1791908 00:33:00.689 14:03:02 -- common/autotest_common.sh@945 -- # kill 1791908 00:33:00.689 [2024-07-11 14:03:02.832757] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:00.689 14:03:02 -- common/autotest_common.sh@950 -- # wait 1791908 00:33:00.689 14:03:03 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:00.689 14:03:03 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:00.689 14:03:03 -- spdkcli/common.sh@13 -- # '[' -n 1791908 ']' 00:33:00.689 14:03:03 -- spdkcli/common.sh@14 -- # killprocess 1791908 00:33:00.689 14:03:03 -- common/autotest_common.sh@926 -- # '[' -z 1791908 ']' 00:33:00.689 14:03:03 -- common/autotest_common.sh@930 -- # kill -0 1791908 00:33:00.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1791908) - No such process 00:33:00.689 14:03:03 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1791908 is not found' 00:33:00.689 Process with pid 1791908 is not found 00:33:00.689 14:03:03 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:00.689 14:03:03 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:00.689 14:03:03 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:00.689 00:33:00.689 real 0m15.771s 00:33:00.689 user 0m32.664s 00:33:00.689 sys 0m0.689s 00:33:00.689 14:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:00.689 14:03:03 -- common/autotest_common.sh@10 -- # set +x 00:33:00.689 ************************************ 00:33:00.689 END TEST spdkcli_nvmf_tcp 00:33:00.689 ************************************ 00:33:00.689 14:03:03 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:00.689 14:03:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:00.689 14:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:00.689 14:03:03 -- common/autotest_common.sh@10 -- # set +x 00:33:00.689 ************************************ 00:33:00.689 START TEST 
nvmf_identify_passthru 00:33:00.689 ************************************ 00:33:00.689 14:03:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:00.689 * Looking for test storage... 00:33:00.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:00.689 14:03:03 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.689 14:03:03 -- nvmf/common.sh@7 -- # uname -s 00:33:00.689 14:03:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.689 14:03:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.689 14:03:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.689 14:03:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.689 14:03:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.689 14:03:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.689 14:03:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.689 14:03:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.689 14:03:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.689 14:03:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.689 14:03:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.689 14:03:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:00.689 14:03:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.689 14:03:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.689 14:03:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.689 14:03:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.689 14:03:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.689 14:03:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.689 14:03:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.689 14:03:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.689 14:03:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- paths/export.sh@5 -- # export PATH 00:33:00.948 
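One quirk visible in the paths/export.sh trace: each time the file is sourced it prepends the go/golangci/protoc directories again, so nested sourcing leaves the same entries in PATH several times over. That is harmless, since lookup stops at the first hit, but a duplicate-free prepend is a one-liner; the helper below is purely illustrative, not part of the harness:

path_prepend() {
    # add $1 to the front of PATH only if it is not already a component
    case ":$PATH:" in
        *":$1:"*) ;;
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin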
14:03:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- nvmf/common.sh@46 -- # : 0 00:33:00.948 14:03:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:00.948 14:03:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:00.948 14:03:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:00.948 14:03:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.948 14:03:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.948 14:03:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:00.948 14:03:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:00.948 14:03:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:00.948 14:03:03 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.948 14:03:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.948 14:03:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.948 14:03:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.948 14:03:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- paths/export.sh@5 -- # export PATH 00:33:00.948 14:03:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.948 14:03:03 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:00.948 14:03:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:00.948 14:03:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.948 14:03:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:00.948 14:03:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:00.948 14:03:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:00.948 14:03:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.948 14:03:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:00.948 14:03:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.948 14:03:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:00.948 14:03:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:00.948 14:03:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:00.948 14:03:03 -- common/autotest_common.sh@10 -- # set +x 00:33:06.224 14:03:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:06.224 14:03:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:06.224 14:03:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:06.224 14:03:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:06.224 14:03:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:06.224 14:03:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:06.224 14:03:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:06.224 14:03:08 -- nvmf/common.sh@294 -- # net_devs=() 00:33:06.224 14:03:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:06.224 14:03:08 -- nvmf/common.sh@295 -- # e810=() 00:33:06.224 14:03:08 -- nvmf/common.sh@295 -- # local -ga e810 00:33:06.224 14:03:08 -- nvmf/common.sh@296 -- # x722=() 00:33:06.224 14:03:08 -- nvmf/common.sh@296 -- # local -ga x722 00:33:06.224 14:03:08 -- nvmf/common.sh@297 -- # mlx=() 00:33:06.224 14:03:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:06.224 14:03:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.224 14:03:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:06.224 14:03:08 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:06.224 14:03:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:06.224 14:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:06.224 14:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:06.224 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:06.224 14:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:06.224 14:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:06.224 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:06.224 14:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:06.224 14:03:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:06.224 14:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:06.224 14:03:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.224 14:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:06.224 14:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.224 14:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:06.224 Found net devices under 0000:86:00.0: cvl_0_0 00:33:06.224 14:03:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.224 14:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:06.224 14:03:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.224 14:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:06.224 14:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.224 14:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:06.224 Found net devices under 0000:86:00.1: cvl_0_1 00:33:06.224 14:03:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.224 14:03:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:06.225 14:03:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:06.225 14:03:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:06.225 14:03:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:06.225 14:03:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:06.225 14:03:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.225 14:03:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.225 14:03:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.225 14:03:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:06.225 14:03:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.225 14:03:08 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.225 14:03:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:06.225 14:03:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.225 14:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.225 14:03:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:06.225 14:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:06.225 14:03:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.225 14:03:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.225 14:03:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.225 14:03:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.225 14:03:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:06.225 14:03:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.225 14:03:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.225 14:03:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.225 14:03:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:06.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:33:06.225 00:33:06.225 --- 10.0.0.2 ping statistics --- 00:33:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.225 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:33:06.225 14:03:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:33:06.225 00:33:06.225 --- 10.0.0.1 ping statistics --- 00:33:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.225 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:33:06.225 14:03:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.225 14:03:08 -- nvmf/common.sh@410 -- # return 0 00:33:06.225 14:03:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:06.225 14:03:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.225 14:03:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:06.225 14:03:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:06.225 14:03:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.225 14:03:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:06.225 14:03:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:06.225 14:03:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:06.225 14:03:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:06.225 14:03:08 -- common/autotest_common.sh@10 -- # set +x 00:33:06.225 14:03:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:06.225 14:03:08 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:06.225 14:03:08 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:06.225 14:03:08 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:06.225 14:03:08 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:06.225 14:03:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:06.225 14:03:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:06.225 14:03:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:06.225 14:03:08 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:06.225 14:03:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:06.225 14:03:08 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:06.225 14:03:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:06.225 14:03:08 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:06.225 14:03:08 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:06.225 14:03:08 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:06.225 14:03:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:06.225 14:03:08 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:06.225 14:03:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:06.225 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.418 14:03:12 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:10.418 14:03:12 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:10.418 14:03:12 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:10.418 14:03:12 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:10.418 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.611 14:03:16 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:14.611 14:03:16 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:14.611 14:03:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 14:03:16 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:14.611 14:03:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 14:03:16 -- target/identify_passthru.sh@31 -- # nvmfpid=1799517 00:33:14.611 14:03:16 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.611 14:03:16 -- target/identify_passthru.sh@35 -- # waitforlisten 1799517 00:33:14.611 14:03:16 -- common/autotest_common.sh@819 -- # '[' -z 1799517 ']' 00:33:14.611 14:03:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.611 14:03:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:14.611 14:03:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.611 14:03:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 14:03:16 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:14.611 [2024-07-11 14:03:16.771053] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
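get_first_nvme_bdf above resolves which controller to probe by asking gen_nvme.sh for a local NVMe bdev config and extracting the PCI addresses with jq; spdk_nvme_identify then reads the serial and model straight over PCIe. Condensed, the lookup is roughly (paths relative to the repo root; only the first-match case is shown):

bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n 1)
build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
    | awk '/Serial Number:/ {print $3}'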
00:33:14.611 [2024-07-11 14:03:16.771098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.611 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.611 [2024-07-11 14:03:16.828691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.611 [2024-07-11 14:03:16.868055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:14.611 [2024-07-11 14:03:16.868186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.611 [2024-07-11 14:03:16.868196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.611 [2024-07-11 14:03:16.868204] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.611 [2024-07-11 14:03:16.868246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.611 [2024-07-11 14:03:16.868341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.611 [2024-07-11 14:03:16.868518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.611 [2024-07-11 14:03:16.868520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.611 14:03:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:14.611 14:03:16 -- common/autotest_common.sh@852 -- # return 0 00:33:14.611 14:03:16 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:14.611 14:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 INFO: Log level set to 20 00:33:14.611 INFO: Requests: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "method": "nvmf_set_config", 00:33:14.611 "id": 1, 00:33:14.611 "params": { 00:33:14.611 "admin_cmd_passthru": { 00:33:14.611 "identify_ctrlr": true 00:33:14.611 } 00:33:14.611 } 00:33:14.611 } 00:33:14.611 00:33:14.611 INFO: response: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "id": 1, 00:33:14.611 "result": true 00:33:14.611 } 00:33:14.611 00:33:14.611 14:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.611 14:03:16 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:14.611 14:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 INFO: Setting log level to 20 00:33:14.611 INFO: Setting log level to 20 00:33:14.611 INFO: Log level set to 20 00:33:14.611 INFO: Log level set to 20 00:33:14.611 INFO: Requests: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "method": "framework_start_init", 00:33:14.611 "id": 1 00:33:14.611 } 00:33:14.611 00:33:14.611 INFO: Requests: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "method": "framework_start_init", 00:33:14.611 "id": 1 00:33:14.611 } 00:33:14.611 00:33:14.611 [2024-07-11 14:03:16.985024] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:14.611 INFO: response: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "id": 1, 00:33:14.611 "result": true 00:33:14.611 } 00:33:14.611 00:33:14.611 INFO: response: 00:33:14.611 { 00:33:14.611 "jsonrpc": "2.0", 00:33:14.611 "id": 1, 00:33:14.611 "result": true 00:33:14.611 } 00:33:14.611 00:33:14.611 14:03:16 -- 
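The INFO blocks are rpc_cmd -v echoing the raw JSON-RPC 2.0 traffic: nvmf_set_config flips admin_cmd_passthru.identify_ctrlr before framework_start_init, which is what later lets the target answer Identify admin commands with the backing controller's data ("Custom identify ctrlr handler enabled"). The same request can be sent by hand over the UNIX socket; a sketch assuming an OpenBSD-style nc with -U support:

printf '%s' '{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
  "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}' \
    | nc -U -w 1 /var/tmp/spdk.sock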
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.611 14:03:16 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.611 14:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.611 14:03:16 -- common/autotest_common.sh@10 -- # set +x 00:33:14.611 INFO: Setting log level to 40 00:33:14.611 INFO: Setting log level to 40 00:33:14.611 INFO: Setting log level to 40 00:33:14.611 [2024-07-11 14:03:16.998344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.612 14:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.612 14:03:17 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:14.612 14:03:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:14.612 14:03:17 -- common/autotest_common.sh@10 -- # set +x 00:33:14.612 14:03:17 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:14.612 14:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.612 14:03:17 -- common/autotest_common.sh@10 -- # set +x 00:33:17.903 Nvme0n1 00:33:17.903 14:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.903 14:03:19 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:17.903 14:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.903 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:33:17.903 14:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.903 14:03:19 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:17.903 14:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.903 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:33:17.903 14:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.903 14:03:19 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.903 14:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.903 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:33:17.903 [2024-07-11 14:03:19.887945] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.903 14:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.903 14:03:19 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:17.903 14:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.903 14:03:19 -- common/autotest_common.sh@10 -- # set +x 00:33:17.904 [2024-07-11 14:03:19.895747] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:17.904 [ 00:33:17.904 { 00:33:17.904 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:17.904 "subtype": "Discovery", 00:33:17.904 "listen_addresses": [], 00:33:17.904 "allow_any_host": true, 00:33:17.904 "hosts": [] 00:33:17.904 }, 00:33:17.904 { 00:33:17.904 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.904 "subtype": "NVMe", 00:33:17.904 "listen_addresses": [ 00:33:17.904 { 00:33:17.904 "transport": "TCP", 00:33:17.904 "trtype": "TCP", 00:33:17.904 "adrfam": "IPv4", 00:33:17.904 "traddr": "10.0.0.2", 00:33:17.904 "trsvcid": "4420" 00:33:17.904 } 00:33:17.904 ], 00:33:17.904 "allow_any_host": true, 00:33:17.904 "hosts": [], 00:33:17.904 "serial_number": "SPDK00000000000001", 
00:33:17.904 "model_number": "SPDK bdev Controller", 00:33:17.904 "max_namespaces": 1, 00:33:17.904 "min_cntlid": 1, 00:33:17.904 "max_cntlid": 65519, 00:33:17.904 "namespaces": [ 00:33:17.904 { 00:33:17.904 "nsid": 1, 00:33:17.904 "bdev_name": "Nvme0n1", 00:33:17.904 "name": "Nvme0n1", 00:33:17.904 "nguid": "1E53159E941C405BA1ABF46C9318515C", 00:33:17.904 "uuid": "1e53159e-941c-405b-a1ab-f46c9318515c" 00:33:17.904 } 00:33:17.904 ] 00:33:17.904 } 00:33:17.904 ] 00:33:17.904 14:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.904 14:03:19 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:17.904 14:03:19 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:17.904 14:03:19 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:17.904 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.904 14:03:20 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:17.904 14:03:20 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:17.904 14:03:20 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:17.904 14:03:20 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:17.904 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.904 14:03:20 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:17.904 14:03:20 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:17.904 14:03:20 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:17.904 14:03:20 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:17.904 14:03:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:17.904 14:03:20 -- common/autotest_common.sh@10 -- # set +x 00:33:17.904 14:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:17.904 14:03:20 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:17.904 14:03:20 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:17.904 14:03:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:17.904 14:03:20 -- nvmf/common.sh@116 -- # sync 00:33:17.904 14:03:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:17.904 14:03:20 -- nvmf/common.sh@119 -- # set +e 00:33:17.904 14:03:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:17.904 14:03:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:17.904 rmmod nvme_tcp 00:33:17.904 rmmod nvme_fabrics 00:33:17.904 rmmod nvme_keyring 00:33:17.904 14:03:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:17.904 14:03:20 -- nvmf/common.sh@123 -- # set -e 00:33:17.904 14:03:20 -- nvmf/common.sh@124 -- # return 0 00:33:17.904 14:03:20 -- nvmf/common.sh@477 -- # '[' -n 1799517 ']' 00:33:17.904 14:03:20 -- nvmf/common.sh@478 -- # killprocess 1799517 00:33:17.904 14:03:20 -- common/autotest_common.sh@926 -- # '[' -z 1799517 ']' 00:33:17.904 14:03:20 -- common/autotest_common.sh@930 -- # kill -0 1799517 00:33:17.904 14:03:20 -- common/autotest_common.sh@931 -- # uname 00:33:17.904 14:03:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:17.904 14:03:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1799517 00:33:17.904 14:03:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:17.904 14:03:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:17.904 14:03:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1799517' 00:33:17.904 killing process with pid 1799517 00:33:17.904 14:03:20 -- common/autotest_common.sh@945 -- # kill 1799517 00:33:17.904 [2024-07-11 14:03:20.230008] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:17.904 14:03:20 -- common/autotest_common.sh@950 -- # wait 1799517 00:33:19.313 14:03:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:19.313 14:03:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:19.313 14:03:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:19.313 14:03:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:19.313 14:03:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:19.313 14:03:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.313 14:03:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:19.313 14:03:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.852 14:03:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:21.852 00:33:21.852 real 0m20.685s 00:33:21.852 user 0m26.356s 00:33:21.852 sys 0m4.619s 00:33:21.852 14:03:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:21.852 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:33:21.852 ************************************ 00:33:21.852 END TEST nvmf_identify_passthru 00:33:21.852 ************************************ 00:33:21.852 14:03:23 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:21.852 14:03:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:21.852 14:03:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:21.852 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:33:21.852 ************************************ 00:33:21.852 START TEST nvmf_dif 00:33:21.852 ************************************ 00:33:21.852 14:03:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:21.852 * Looking for test storage... 
00:33:21.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:21.852 14:03:23 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.852 14:03:23 -- nvmf/common.sh@7 -- # uname -s 00:33:21.852 14:03:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.852 14:03:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.852 14:03:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.852 14:03:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.852 14:03:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.852 14:03:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.852 14:03:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.852 14:03:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.852 14:03:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.852 14:03:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.852 14:03:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:21.852 14:03:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:21.852 14:03:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.852 14:03:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.852 14:03:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.852 14:03:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.852 14:03:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.852 14:03:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.852 14:03:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.852 14:03:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.852 14:03:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.852 14:03:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.852 14:03:23 -- paths/export.sh@5 -- # export PATH 00:33:21.852 14:03:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.852 14:03:23 -- nvmf/common.sh@46 -- # : 0 00:33:21.852 14:03:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:21.852 14:03:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:21.852 14:03:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:21.852 14:03:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.852 14:03:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.852 14:03:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:21.852 14:03:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:21.852 14:03:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:21.852 14:03:23 -- target/dif.sh@15 -- # NULL_META=16 00:33:21.852 14:03:23 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:21.852 14:03:23 -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:21.852 14:03:23 -- target/dif.sh@15 -- # NULL_DIF=1 00:33:21.852 14:03:23 -- target/dif.sh@135 -- # nvmftestinit 00:33:21.852 14:03:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:21.852 14:03:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.852 14:03:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:21.852 14:03:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:21.852 14:03:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:21.852 14:03:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.852 14:03:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:21.852 14:03:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.853 14:03:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:21.853 14:03:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:21.853 14:03:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:21.853 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:33:27.127 14:03:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:27.127 14:03:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:27.127 14:03:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:27.127 14:03:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:27.127 14:03:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:27.127 14:03:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:27.127 14:03:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:27.127 14:03:28 -- nvmf/common.sh@294 -- # net_devs=() 00:33:27.127 14:03:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:27.127 14:03:28 -- nvmf/common.sh@295 -- # e810=() 00:33:27.127 14:03:28 -- nvmf/common.sh@295 -- # local -ga e810 00:33:27.127 14:03:28 -- nvmf/common.sh@296 -- # x722=() 00:33:27.127 14:03:28 -- nvmf/common.sh@296 -- # local -ga x722 00:33:27.127 14:03:28 -- nvmf/common.sh@297 -- # mlx=() 00:33:27.127 14:03:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:27.127 14:03:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:33:27.127 14:03:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:27.127 14:03:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:27.127 14:03:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:27.127 14:03:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:27.127 14:03:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:27.127 14:03:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:27.127 14:03:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:27.127 14:03:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:27.127 14:03:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:27.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:27.127 14:03:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:27.128 14:03:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:27.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:27.128 14:03:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:27.128 14:03:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:27.128 14:03:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.128 14:03:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:27.128 14:03:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.128 14:03:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:27.128 Found net devices under 0000:86:00.0: cvl_0_0 00:33:27.128 14:03:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:27.128 14:03:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:27.128 14:03:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:27.128 14:03:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:27.128 14:03:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:27.128 14:03:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:27.128 Found net devices under 0000:86:00.1: cvl_0_1 00:33:27.128 14:03:28 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:27.128 14:03:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:27.128 14:03:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:27.128 14:03:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:27.128 14:03:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:27.128 14:03:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:27.128 14:03:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:27.128 14:03:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:27.128 14:03:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:27.128 14:03:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:27.128 14:03:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:27.128 14:03:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:27.128 14:03:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:27.128 14:03:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.128 14:03:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:27.128 14:03:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:27.128 14:03:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:27.128 14:03:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:27.128 14:03:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:27.128 14:03:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:27.128 14:03:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:27.128 14:03:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.128 14:03:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.128 14:03:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.128 14:03:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:27.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:33:27.128 00:33:27.128 --- 10.0.0.2 ping statistics --- 00:33:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.128 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:27.128 14:03:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:27.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:33:27.128 00:33:27.128 --- 10.0.0.1 ping statistics --- 00:33:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.128 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:33:27.128 14:03:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.128 14:03:28 -- nvmf/common.sh@410 -- # return 0 00:33:27.128 14:03:28 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:27.128 14:03:28 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:29.032 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:29.032 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:29.032 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:29.290 14:03:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.290 14:03:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:29.290 14:03:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:29.290 14:03:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.290 14:03:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:29.290 14:03:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:29.290 14:03:31 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:29.290 14:03:31 -- target/dif.sh@137 -- # nvmfappstart 00:33:29.290 14:03:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:29.290 14:03:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:29.290 14:03:31 -- common/autotest_common.sh@10 -- # set +x 00:33:29.290 14:03:31 -- nvmf/common.sh@469 -- # nvmfpid=1804800 00:33:29.290 14:03:31 -- nvmf/common.sh@470 -- # waitforlisten 1804800 00:33:29.290 14:03:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:29.290 14:03:31 -- common/autotest_common.sh@819 -- # '[' -z 1804800 ']' 00:33:29.290 14:03:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.290 14:03:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:29.290 14:03:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
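For reference, the nvmf_tcp_init sequence traced above builds a self-contained loopback topology on the dual-port NIC found earlier: port cvl_0_0 is moved into a private network namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator. Condensed to its effective commands (the full nvmf_tcp_init helper in nvmf/common.sh also flushes stale addresses first and covers the RDMA and virtual-device paths):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root namespace -> target reachability

nvmf_tgt itself is then launched under "ip netns exec cvl_0_0_ns_spdk", as in the command above, so it listens on 10.0.0.2:4420 while fio connects from the root namespace via cvl_0_1.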
00:33:29.290 14:03:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:29.290 14:03:31 -- common/autotest_common.sh@10 -- # set +x 00:33:29.290 [2024-07-11 14:03:31.639699] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:33:29.290 [2024-07-11 14:03:31.639744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.290 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.290 [2024-07-11 14:03:31.698767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.290 [2024-07-11 14:03:31.738484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:29.290 [2024-07-11 14:03:31.738614] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.291 [2024-07-11 14:03:31.738622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.291 [2024-07-11 14:03:31.738629] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.291 [2024-07-11 14:03:31.738646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.224 14:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:30.224 14:03:32 -- common/autotest_common.sh@852 -- # return 0 00:33:30.224 14:03:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:30.224 14:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 14:03:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.224 14:03:32 -- target/dif.sh@139 -- # create_transport 00:33:30.224 14:03:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:30.224 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 [2024-07-11 14:03:32.472325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.224 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.224 14:03:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:30.224 14:03:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:30.224 14:03:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 ************************************ 00:33:30.224 START TEST fio_dif_1_default 00:33:30.224 ************************************ 00:33:30.224 14:03:32 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:33:30.224 14:03:32 -- target/dif.sh@86 -- # create_subsystems 0 00:33:30.224 14:03:32 -- target/dif.sh@28 -- # local sub 00:33:30.224 14:03:32 -- target/dif.sh@30 -- # for sub in "$@" 00:33:30.224 14:03:32 -- target/dif.sh@31 -- # create_subsystem 0 00:33:30.224 14:03:32 -- target/dif.sh@18 -- # local sub_id=0 00:33:30.224 14:03:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:30.224 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 bdev_null0 00:33:30.224 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.224 14:03:32 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:30.224 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.224 14:03:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:30.224 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.224 14:03:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:30.224 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.224 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:33:30.224 [2024-07-11 14:03:32.508551] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.224 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.224 14:03:32 -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:30.224 14:03:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.224 14:03:32 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.224 14:03:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:30.224 14:03:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:30.224 14:03:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:30.224 14:03:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.224 14:03:32 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:30.224 14:03:32 -- common/autotest_common.sh@1320 -- # shift 00:33:30.224 14:03:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:30.224 14:03:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.224 14:03:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:30.224 14:03:32 -- target/dif.sh@82 -- # gen_fio_conf 00:33:30.224 14:03:32 -- nvmf/common.sh@520 -- # config=() 00:33:30.224 14:03:32 -- target/dif.sh@54 -- # local file 00:33:30.224 14:03:32 -- nvmf/common.sh@520 -- # local subsystem config 00:33:30.224 14:03:32 -- target/dif.sh@56 -- # cat 00:33:30.224 14:03:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:30.224 14:03:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:30.224 { 00:33:30.224 "params": { 00:33:30.224 "name": "Nvme$subsystem", 00:33:30.224 "trtype": "$TEST_TRANSPORT", 00:33:30.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:30.224 "adrfam": "ipv4", 00:33:30.224 "trsvcid": "$NVMF_PORT", 00:33:30.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:30.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:30.224 "hdgst": ${hdgst:-false}, 00:33:30.224 "ddgst": ${ddgst:-false} 00:33:30.224 }, 00:33:30.224 "method": "bdev_nvme_attach_controller" 00:33:30.224 } 00:33:30.224 EOF 00:33:30.224 )") 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:30.224 14:03:32 -- 
nvmf/common.sh@542 -- # cat 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:30.224 14:03:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:30.224 14:03:32 -- target/dif.sh@72 -- # (( file <= files )) 00:33:30.224 14:03:32 -- nvmf/common.sh@544 -- # jq . 00:33:30.224 14:03:32 -- nvmf/common.sh@545 -- # IFS=, 00:33:30.224 14:03:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:30.224 "params": { 00:33:30.224 "name": "Nvme0", 00:33:30.224 "trtype": "tcp", 00:33:30.224 "traddr": "10.0.0.2", 00:33:30.224 "adrfam": "ipv4", 00:33:30.224 "trsvcid": "4420", 00:33:30.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:30.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:30.224 "hdgst": false, 00:33:30.224 "ddgst": false 00:33:30.224 }, 00:33:30.224 "method": "bdev_nvme_attach_controller" 00:33:30.224 }' 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:30.224 14:03:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:30.224 14:03:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:30.224 14:03:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:30.224 14:03:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:30.224 14:03:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:30.224 14:03:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:30.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:30.481 fio-3.35 00:33:30.481 Starting 1 thread 00:33:30.481 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.049 [2024-07-11 14:03:33.254993] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
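The two descriptors handed to fio above are anonymous pipes: /dev/fd/62 carries the SPDK JSON configuration just printed (a bdev_nvme_attach_controller stanza per target subsystem, wrapped in the envelope gen_nvmf_target_json builds around it) and /dev/fd/61 carries the generated job file. A rough standalone equivalent with ordinary files: the job options mirror the header fio prints next, while the dif.json/dif.fio file names and the Nvme0n1 bdev name (what the attach above would surface for namespace 1 of controller Nvme0) are illustrative assumptions, not taken from this log:

    # dif.fio, a job file matching the header fio prints next
    # (filename is an SPDK bdev name; Nvme0n1 is an assumed name for
    # namespace 1 of the "Nvme0" controller attached via the JSON above)
    [global]
    thread=1
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=4k
    iodepth=4
    runtime=10

    # invocation, with the plugin path and flags taken verbatim from the trace above
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf dif.json dif.fio

The LD_PRELOAD of build/fio/spdk_bdev is what registers the spdk_bdev ioengine inside stock fio; without it the --ioengine=spdk_bdev option would be rejected.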
00:33:31.049 [2024-07-11 14:03:33.255036] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:33:41.026
00:33:41.026 filename0: (groupid=0, jobs=1): err= 0: pid=1805192: Thu Jul 11 14:03:43 2024
00:33:41.026 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec)
00:33:41.026 slat (nsec): min=5864, max=25633, avg=6214.49, stdev=1267.45
00:33:41.026 clat (usec): min=596, max=44970, avg=21090.41, stdev=20295.37
00:33:41.026 lat (usec): min=602, max=44996, avg=21096.62, stdev=20295.40
00:33:41.026 clat percentiles (usec):
00:33:41.026 | 1.00th=[ 611], 5.00th=[ 619], 10.00th=[ 619], 20.00th=[ 644],
00:33:41.026 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[41157], 60.00th=[41157],
00:33:41.026 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:33:41.026 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:33:41.026 | 99.99th=[44827]
00:33:41.026 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19
00:33:41.026 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19
00:33:41.026 lat (usec) : 750=27.37%, 1000=22.42%
00:33:41.026 lat (msec) : 50=50.21%
00:33:41.026 cpu : usr=95.59%, sys=4.16%, ctx=9, majf=0, minf=233
00:33:41.026 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:41.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:41.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:41.026 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:41.026 latency : target=0, window=0, percentile=100.00%, depth=4
00:33:41.026
00:33:41.026 Run status group 0 (all jobs):
00:33:41.026 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10005-10005msec
00:33:41.286 14:03:43 -- target/dif.sh@88 -- # destroy_subsystems 0
00:33:41.286 14:03:43 -- target/dif.sh@43 -- # local sub
00:33:41.286 14:03:43 -- target/dif.sh@45 -- # for sub in "$@"
00:33:41.286 14:03:43 -- target/dif.sh@46 -- # destroy_subsystem 0
00:33:41.286 14:03:43 -- target/dif.sh@36 -- # local sub_id=0
00:33:41.286 14:03:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x
00:33:41.286 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:41.286 14:03:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x
00:33:41.286 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:33:41.286
00:33:41.286 real 0m11.062s
00:33:41.286 user 0m15.734s
00:33:41.286 sys 0m0.670s
00:33:41.286 14:03:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x
00:33:41.286 ************************************
00:33:41.286 END TEST fio_dif_1_default
00:33:41.286 ************************************
00:33:41.286 14:03:43 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:33:41.286 14:03:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:33:41.286 14:03:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x
00:33:41.286 ************************************
00:33:41.286 START TEST
fio_dif_1_multi_subsystems 00:33:41.286 ************************************ 00:33:41.286 14:03:43 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:33:41.286 14:03:43 -- target/dif.sh@92 -- # local files=1 00:33:41.286 14:03:43 -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:41.286 14:03:43 -- target/dif.sh@28 -- # local sub 00:33:41.286 14:03:43 -- target/dif.sh@30 -- # for sub in "$@" 00:33:41.286 14:03:43 -- target/dif.sh@31 -- # create_subsystem 0 00:33:41.286 14:03:43 -- target/dif.sh@18 -- # local sub_id=0 00:33:41.286 14:03:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.286 bdev_null0 00:33:41.286 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.286 14:03:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.286 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.286 14:03:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.286 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.286 14:03:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:41.286 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.286 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.286 [2024-07-11 14:03:43.611729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.287 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.287 14:03:43 -- target/dif.sh@30 -- # for sub in "$@" 00:33:41.287 14:03:43 -- target/dif.sh@31 -- # create_subsystem 1 00:33:41.287 14:03:43 -- target/dif.sh@18 -- # local sub_id=1 00:33:41.287 14:03:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:41.287 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.287 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.287 bdev_null1 00:33:41.287 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.287 14:03:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:41.287 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.287 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.287 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.287 14:03:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:41.287 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.287 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:33:41.287 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.287 14:03:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.287 14:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.287 14:03:43 -- 
common/autotest_common.sh@10 -- # set +x 00:33:41.287 14:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.287 14:03:43 -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:41.287 14:03:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.287 14:03:43 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.287 14:03:43 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:41.287 14:03:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:41.287 14:03:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:41.287 14:03:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:41.287 14:03:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:41.287 14:03:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.287 14:03:43 -- common/autotest_common.sh@1320 -- # shift 00:33:41.287 14:03:43 -- target/dif.sh@82 -- # gen_fio_conf 00:33:41.287 14:03:43 -- nvmf/common.sh@520 -- # config=() 00:33:41.287 14:03:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:41.287 14:03:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.287 14:03:43 -- nvmf/common.sh@520 -- # local subsystem config 00:33:41.287 14:03:43 -- target/dif.sh@54 -- # local file 00:33:41.287 14:03:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:41.287 14:03:43 -- target/dif.sh@56 -- # cat 00:33:41.287 14:03:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:41.287 { 00:33:41.287 "params": { 00:33:41.287 "name": "Nvme$subsystem", 00:33:41.287 "trtype": "$TEST_TRANSPORT", 00:33:41.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.287 "adrfam": "ipv4", 00:33:41.287 "trsvcid": "$NVMF_PORT", 00:33:41.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.287 "hdgst": ${hdgst:-false}, 00:33:41.287 "ddgst": ${ddgst:-false} 00:33:41.287 }, 00:33:41.287 "method": "bdev_nvme_attach_controller" 00:33:41.287 } 00:33:41.287 EOF 00:33:41.287 )") 00:33:41.287 14:03:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.287 14:03:43 -- nvmf/common.sh@542 -- # cat 00:33:41.287 14:03:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:41.287 14:03:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:41.287 14:03:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:41.287 14:03:43 -- target/dif.sh@72 -- # (( file <= files )) 00:33:41.287 14:03:43 -- target/dif.sh@73 -- # cat 00:33:41.287 14:03:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:41.287 14:03:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:41.287 { 00:33:41.287 "params": { 00:33:41.287 "name": "Nvme$subsystem", 00:33:41.287 "trtype": "$TEST_TRANSPORT", 00:33:41.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:41.287 "adrfam": "ipv4", 00:33:41.287 "trsvcid": "$NVMF_PORT", 00:33:41.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:41.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:41.287 "hdgst": ${hdgst:-false}, 00:33:41.287 "ddgst": ${ddgst:-false} 00:33:41.287 }, 00:33:41.287 "method": "bdev_nvme_attach_controller" 00:33:41.287 } 00:33:41.287 EOF 00:33:41.287 )") 00:33:41.287 14:03:43 -- 
target/dif.sh@72 -- # (( file++ )) 00:33:41.287 14:03:43 -- target/dif.sh@72 -- # (( file <= files )) 00:33:41.287 14:03:43 -- nvmf/common.sh@542 -- # cat 00:33:41.287 14:03:43 -- nvmf/common.sh@544 -- # jq . 00:33:41.287 14:03:43 -- nvmf/common.sh@545 -- # IFS=, 00:33:41.287 14:03:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:41.287 "params": { 00:33:41.287 "name": "Nvme0", 00:33:41.287 "trtype": "tcp", 00:33:41.287 "traddr": "10.0.0.2", 00:33:41.287 "adrfam": "ipv4", 00:33:41.287 "trsvcid": "4420", 00:33:41.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:41.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:41.288 "hdgst": false, 00:33:41.288 "ddgst": false 00:33:41.288 }, 00:33:41.288 "method": "bdev_nvme_attach_controller" 00:33:41.288 },{ 00:33:41.288 "params": { 00:33:41.288 "name": "Nvme1", 00:33:41.288 "trtype": "tcp", 00:33:41.288 "traddr": "10.0.0.2", 00:33:41.288 "adrfam": "ipv4", 00:33:41.288 "trsvcid": "4420", 00:33:41.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:41.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:41.288 "hdgst": false, 00:33:41.288 "ddgst": false 00:33:41.288 }, 00:33:41.288 "method": "bdev_nvme_attach_controller" 00:33:41.288 }' 00:33:41.288 14:03:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:41.288 14:03:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:41.288 14:03:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.288 14:03:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:41.288 14:03:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:41.288 14:03:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:41.288 14:03:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:41.288 14:03:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:41.288 14:03:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:41.288 14:03:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:41.547 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:41.547 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:41.547 fio-3.35 00:33:41.547 Starting 2 threads 00:33:41.547 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.483 [2024-07-11 14:03:44.608777] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
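For the two-subsystem case the generated JSON simply carries two bdev_nvme_attach_controller stanzas (Nvme0 against cnode0, Nvme1 against cnode1), as printed above. On the target side, each create_subsystem call in dif.sh boils down to the same four RPCs; condensed into a loop over both subsystems, using scripts/rpc.py directly instead of the rpc_cmd wrapper (arguments exactly as traced, the loop form is ours):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in 0 1; do
        # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
        $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

Because the transport was created with --dif-insert-or-strip, the target inserts and strips the 16-byte protection information itself, so the initiator-side fio jobs work with plain 512-byte blocks; that round trip is what this suite exercises.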
00:33:42.483 [2024-07-11 14:03:44.608819] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:52.519 00:33:52.520 filename0: (groupid=0, jobs=1): err= 0: pid=1807179: Thu Jul 11 14:03:54 2024 00:33:52.520 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10040msec) 00:33:52.520 slat (nsec): min=5902, max=53941, avg=10090.59, stdev=6915.83 00:33:52.520 clat (usec): min=595, max=43094, avg=21103.81, stdev=20416.00 00:33:52.520 lat (usec): min=602, max=43126, avg=21113.90, stdev=20413.88 00:33:52.520 clat percentiles (usec): 00:33:52.520 | 1.00th=[ 611], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 635], 00:33:52.520 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[41157], 60.00th=[41157], 00:33:52.520 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:52.520 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:33:52.520 | 99.99th=[43254] 00:33:52.520 bw ( KiB/s): min= 704, max= 769, per=50.12%, avg=758.60, stdev=23.39, samples=20 00:33:52.520 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:33:52.520 lat (usec) : 750=48.11%, 1000=1.58% 00:33:52.520 lat (msec) : 2=0.21%, 50=50.11% 00:33:52.520 cpu : usr=99.19%, sys=0.50%, ctx=18, majf=0, minf=301 00:33:52.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.520 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:52.520 filename1: (groupid=0, jobs=1): err= 0: pid=1807180: Thu Jul 11 14:03:54 2024 00:33:52.520 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:33:52.520 slat (nsec): min=5949, max=39369, avg=9255.36, stdev=6090.01 00:33:52.520 clat (usec): min=443, max=42899, avg=21069.80, stdev=20444.22 00:33:52.520 lat (usec): min=450, max=42928, avg=21079.05, stdev=20442.28 00:33:52.520 clat percentiles (usec): 00:33:52.520 | 1.00th=[ 603], 5.00th=[ 611], 10.00th=[ 611], 20.00th=[ 619], 00:33:52.520 | 30.00th=[ 627], 40.00th=[ 635], 50.00th=[ 1139], 60.00th=[41157], 00:33:52.520 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:33:52.520 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:52.520 | 99.99th=[42730] 00:33:52.520 bw ( KiB/s): min= 704, max= 769, per=50.12%, avg=758.05, stdev=24.05, samples=19 00:33:52.520 iops : min= 176, max= 192, avg=189.47, stdev= 5.99, samples=19 00:33:52.520 lat (usec) : 500=0.21%, 750=48.31%, 1000=1.05% 00:33:52.520 lat (msec) : 2=0.42%, 50=50.00% 00:33:52.520 cpu : usr=98.58%, sys=1.14%, ctx=11, majf=0, minf=76 00:33:52.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:52.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.520 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:52.520 00:33:52.520 Run status group 0 (all jobs): 00:33:52.520 READ: bw=1512KiB/s (1549kB/s), 757KiB/s-758KiB/s (775kB/s-777kB/s), io=14.8MiB (15.5MB), run=10001-10040msec 00:33:52.520 14:03:54 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:52.520 14:03:54 -- target/dif.sh@43 -- # local sub 00:33:52.520 14:03:54 -- target/dif.sh@45 -- # for sub in "$@" 
00:33:52.520 14:03:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.520 14:03:54 -- target/dif.sh@36 -- # local sub_id=0 00:33:52.520 14:03:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.520 14:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.520 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 14:03:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.520 14:03:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.520 14:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.520 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 14:03:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.520 14:03:54 -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.520 14:03:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.520 14:03:54 -- target/dif.sh@36 -- # local sub_id=1 00:33:52.520 14:03:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.520 14:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.520 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 14:03:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.520 14:03:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.520 14:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.520 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 14:03:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.520 00:33:52.520 real 0m11.372s 00:33:52.520 user 0m26.463s 00:33:52.520 sys 0m0.440s 00:33:52.520 14:03:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.520 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 ************************************ 00:33:52.520 END TEST fio_dif_1_multi_subsystems 00:33:52.520 ************************************ 00:33:52.780 14:03:54 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:52.780 14:03:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:52.780 14:03:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:52.780 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 ************************************ 00:33:52.780 START TEST fio_dif_rand_params 00:33:52.780 ************************************ 00:33:52.780 14:03:54 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:33:52.780 14:03:54 -- target/dif.sh@100 -- # local NULL_DIF 00:33:52.780 14:03:54 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:52.780 14:03:54 -- target/dif.sh@103 -- # NULL_DIF=3 00:33:52.780 14:03:54 -- target/dif.sh@103 -- # bs=128k 00:33:52.780 14:03:54 -- target/dif.sh@103 -- # numjobs=3 00:33:52.780 14:03:54 -- target/dif.sh@103 -- # iodepth=3 00:33:52.780 14:03:54 -- target/dif.sh@103 -- # runtime=5 00:33:52.780 14:03:54 -- target/dif.sh@105 -- # create_subsystems 0 00:33:52.780 14:03:54 -- target/dif.sh@28 -- # local sub 00:33:52.780 14:03:54 -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.780 14:03:54 -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.780 14:03:54 -- target/dif.sh@18 -- # local sub_id=0 00:33:52.780 14:03:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:52.780 14:03:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.780 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 bdev_null0 00:33:52.780 14:03:55 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.780 14:03:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.780 14:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.780 14:03:55 -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 14:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.780 14:03:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.780 14:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.780 14:03:55 -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 14:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.780 14:03:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.780 14:03:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:52.780 14:03:55 -- common/autotest_common.sh@10 -- # set +x 00:33:52.780 [2024-07-11 14:03:55.030221] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.780 14:03:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:52.780 14:03:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:52.780 14:03:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.780 14:03:55 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.780 14:03:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:52.780 14:03:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:52.780 14:03:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.780 14:03:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:52.780 14:03:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:52.780 14:03:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.780 14:03:55 -- common/autotest_common.sh@1320 -- # shift 00:33:52.780 14:03:55 -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.780 14:03:55 -- nvmf/common.sh@520 -- # config=() 00:33:52.780 14:03:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:52.780 14:03:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.780 14:03:55 -- nvmf/common.sh@520 -- # local subsystem config 00:33:52.780 14:03:55 -- target/dif.sh@54 -- # local file 00:33:52.780 14:03:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:52.780 14:03:55 -- target/dif.sh@56 -- # cat 00:33:52.780 14:03:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:52.780 { 00:33:52.780 "params": { 00:33:52.780 "name": "Nvme$subsystem", 00:33:52.780 "trtype": "$TEST_TRANSPORT", 00:33:52.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.780 "adrfam": "ipv4", 00:33:52.780 "trsvcid": "$NVMF_PORT", 00:33:52.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.780 "hdgst": ${hdgst:-false}, 00:33:52.780 "ddgst": ${ddgst:-false} 00:33:52.780 }, 00:33:52.780 "method": "bdev_nvme_attach_controller" 00:33:52.780 } 00:33:52.780 EOF 00:33:52.780 )") 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.780 
14:03:55 -- nvmf/common.sh@542 -- # cat 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:52.780 14:03:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.780 14:03:55 -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.780 14:03:55 -- nvmf/common.sh@544 -- # jq . 00:33:52.780 14:03:55 -- nvmf/common.sh@545 -- # IFS=, 00:33:52.780 14:03:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:52.780 "params": { 00:33:52.780 "name": "Nvme0", 00:33:52.780 "trtype": "tcp", 00:33:52.780 "traddr": "10.0.0.2", 00:33:52.780 "adrfam": "ipv4", 00:33:52.780 "trsvcid": "4420", 00:33:52.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.780 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.780 "hdgst": false, 00:33:52.780 "ddgst": false 00:33:52.780 }, 00:33:52.780 "method": "bdev_nvme_attach_controller" 00:33:52.780 }' 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:52.780 14:03:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:52.780 14:03:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:52.780 14:03:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:52.780 14:03:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:52.780 14:03:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.780 14:03:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.040 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:53.040 ... 00:33:53.040 fio-3.35 00:33:53.040 Starting 3 threads 00:33:53.040 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.607 [2024-07-11 14:03:55.771735] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
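fio_dif_rand_params swaps in a DIF type 3 null bdev (the bdev_null_create ... --dif-type 3 traced above) and a heavier mix: 128 KiB blocks, three jobs, queue depth 3, five-second runs. A job-file sketch consistent with the header fio prints next; the filename0 section name follows the convention seen throughout this log, the bdev name is again the assumed Nvme0n1, and time_based is an inference from the per-job io totals below exceeding the 64 MiB bdev size:

    [global]
    thread=1
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    runtime=5
    time_based=1

The run=5004-5047msec figures in the summary below are consistent with such five-second time-based jobs.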
00:33:53.607 [2024-07-11 14:03:55.771777] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:58.884 00:33:58.884 filename0: (groupid=0, jobs=1): err= 0: pid=1809173: Thu Jul 11 14:04:00 2024 00:33:58.884 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(169MiB/5004msec) 00:33:58.884 slat (nsec): min=6192, max=53801, avg=11632.82, stdev=5345.64 00:33:58.884 clat (usec): min=3941, max=89671, avg=11058.60, stdev=12946.51 00:33:58.884 lat (usec): min=3949, max=89690, avg=11070.23, stdev=12946.85 00:33:58.884 clat percentiles (usec): 00:33:58.884 | 1.00th=[ 4178], 5.00th=[ 4555], 10.00th=[ 4752], 20.00th=[ 5473], 00:33:58.884 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7111], 00:33:58.884 | 70.00th=[ 7767], 80.00th=[ 8979], 90.00th=[46400], 95.00th=[48497], 00:33:58.884 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[89654], 00:33:58.884 | 99.99th=[89654] 00:33:58.884 bw ( KiB/s): min=19200, max=47616, per=34.54%, avg=33450.67, stdev=9906.57, samples=9 00:33:58.884 iops : min= 150, max= 372, avg=261.33, stdev=77.40, samples=9 00:33:58.884 lat (msec) : 4=0.15%, 10=86.27%, 20=3.25%, 50=8.71%, 100=1.62% 00:33:58.884 cpu : usr=96.92%, sys=2.72%, ctx=12, majf=0, minf=177 00:33:58.884 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:58.884 filename0: (groupid=0, jobs=1): err= 0: pid=1809174: Thu Jul 11 14:04:00 2024 00:33:58.884 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(173MiB/5047msec) 00:33:58.884 slat (nsec): min=6145, max=38983, avg=11633.17, stdev=6404.96 00:33:58.884 clat (usec): min=3754, max=52425, avg=10883.84, stdev=13015.34 00:33:58.884 lat (usec): min=3763, max=52438, avg=10895.47, stdev=13015.71 00:33:58.884 clat percentiles (usec): 00:33:58.884 | 1.00th=[ 3949], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 4948], 00:33:58.884 | 30.00th=[ 5669], 40.00th=[ 6128], 50.00th=[ 6521], 60.00th=[ 6849], 00:33:58.884 | 70.00th=[ 7635], 80.00th=[ 8586], 90.00th=[46924], 95.00th=[48497], 00:33:58.884 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:33:58.884 | 99.99th=[52167] 00:33:58.884 bw ( KiB/s): min=16896, max=52224, per=36.54%, avg=35395.40, stdev=10702.92, samples=10 00:33:58.884 iops : min= 132, max= 408, avg=276.50, stdev=83.59, samples=10 00:33:58.884 lat (msec) : 4=1.08%, 10=86.21%, 20=2.17%, 50=9.39%, 100=1.16% 00:33:58.884 cpu : usr=97.07%, sys=2.58%, ctx=8, majf=0, minf=39 00:33:58.884 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 issued rwts: total=1385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:58.884 filename0: (groupid=0, jobs=1): err= 0: pid=1809175: Thu Jul 11 14:04:00 2024 00:33:58.884 read: IOPS=215, BW=26.9MiB/s (28.3MB/s)(135MiB/5005msec) 00:33:58.884 slat (nsec): min=6201, max=40429, avg=15961.90, stdev=8770.27 00:33:58.884 clat (usec): min=4946, max=53721, avg=13890.29, stdev=14698.69 00:33:58.884 lat (usec): min=4953, max=53751, avg=13906.25, stdev=14699.01 
00:33:58.884 clat percentiles (usec): 00:33:58.884 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6521], 00:33:58.884 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8455], 00:33:58.884 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[47973], 95.00th=[49546], 00:33:58.884 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:33:58.884 | 99.99th=[53740] 00:33:58.884 bw ( KiB/s): min=19968, max=35584, per=28.47%, avg=27571.20, stdev=4950.88, samples=10 00:33:58.884 iops : min= 156, max= 278, avg=215.40, stdev=38.68, samples=10 00:33:58.884 lat (msec) : 10=73.86%, 20=11.40%, 50=11.49%, 100=3.24% 00:33:58.884 cpu : usr=96.50%, sys=3.18%, ctx=13, majf=0, minf=129 00:33:58.884 IO depths : 1=3.6%, 2=96.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.884 issued rwts: total=1079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.884 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:58.884 00:33:58.884 Run status group 0 (all jobs): 00:33:58.884 READ: bw=94.6MiB/s (99.2MB/s), 26.9MiB/s-34.3MiB/s (28.3MB/s-36.0MB/s), io=477MiB (501MB), run=5004-5047msec 00:33:58.884 14:04:01 -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:58.884 14:04:01 -- target/dif.sh@43 -- # local sub 00:33:58.884 14:04:01 -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.884 14:04:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.884 14:04:01 -- target/dif.sh@36 -- # local sub_id=0 00:33:58.884 14:04:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.884 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.884 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # NULL_DIF=2 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # bs=4k 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # numjobs=8 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # iodepth=16 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # runtime= 00:33:58.885 14:04:01 -- target/dif.sh@109 -- # files=2 00:33:58.885 14:04:01 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:58.885 14:04:01 -- target/dif.sh@28 -- # local sub 00:33:58.885 14:04:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.885 14:04:01 -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.885 14:04:01 -- target/dif.sh@18 -- # local sub_id=0 00:33:58.885 14:04:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 bdev_null0 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 
-- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 [2024-07-11 14:04:01.140039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.885 14:04:01 -- target/dif.sh@31 -- # create_subsystem 1 00:33:58.885 14:04:01 -- target/dif.sh@18 -- # local sub_id=1 00:33:58.885 14:04:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 bdev_null1 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.885 14:04:01 -- target/dif.sh@31 -- # create_subsystem 2 00:33:58.885 14:04:01 -- target/dif.sh@18 -- # local sub_id=2 00:33:58.885 14:04:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 bdev_null2 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:58.885 14:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:58.885 14:04:01 -- common/autotest_common.sh@10 -- # set +x 00:33:58.885 14:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:58.885 14:04:01 -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:58.885 14:04:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.885 14:04:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.885 14:04:01 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:58.885 14:04:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:58.885 14:04:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.885 14:04:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:58.885 14:04:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:58.885 14:04:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.885 14:04:01 -- common/autotest_common.sh@1320 -- # shift 00:33:58.885 14:04:01 -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.885 14:04:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:58.885 14:04:01 -- nvmf/common.sh@520 -- # config=() 00:33:58.885 14:04:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.885 14:04:01 -- target/dif.sh@54 -- # local file 00:33:58.885 14:04:01 -- nvmf/common.sh@520 -- # local subsystem config 00:33:58.885 14:04:01 -- target/dif.sh@56 -- # cat 00:33:58.885 14:04:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:58.885 { 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme$subsystem", 00:33:58.885 "trtype": "$TEST_TRANSPORT", 00:33:58.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.885 "adrfam": "ipv4", 00:33:58.885 "trsvcid": "$NVMF_PORT", 00:33:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.885 "hdgst": ${hdgst:-false}, 00:33:58.885 "ddgst": ${ddgst:-false} 00:33:58.885 }, 00:33:58.885 "method": "bdev_nvme_attach_controller" 00:33:58.885 } 00:33:58.885 EOF 00:33:58.885 )") 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # cat 00:33:58.885 14:04:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.885 14:04:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:58.885 14:04:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.885 14:04:01 -- target/dif.sh@73 -- # cat 00:33:58.885 14:04:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:58.885 { 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme$subsystem", 00:33:58.885 "trtype": "$TEST_TRANSPORT", 00:33:58.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.885 "adrfam": "ipv4", 
00:33:58.885 "trsvcid": "$NVMF_PORT", 00:33:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.885 "hdgst": ${hdgst:-false}, 00:33:58.885 "ddgst": ${ddgst:-false} 00:33:58.885 }, 00:33:58.885 "method": "bdev_nvme_attach_controller" 00:33:58.885 } 00:33:58.885 EOF 00:33:58.885 )") 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file++ )) 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # cat 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.885 14:04:01 -- target/dif.sh@73 -- # cat 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file++ )) 00:33:58.885 14:04:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:58.885 14:04:01 -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:58.885 { 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme$subsystem", 00:33:58.885 "trtype": "$TEST_TRANSPORT", 00:33:58.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.885 "adrfam": "ipv4", 00:33:58.885 "trsvcid": "$NVMF_PORT", 00:33:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.885 "hdgst": ${hdgst:-false}, 00:33:58.885 "ddgst": ${ddgst:-false} 00:33:58.885 }, 00:33:58.885 "method": "bdev_nvme_attach_controller" 00:33:58.885 } 00:33:58.885 EOF 00:33:58.885 )") 00:33:58.885 14:04:01 -- nvmf/common.sh@542 -- # cat 00:33:58.885 14:04:01 -- nvmf/common.sh@544 -- # jq . 00:33:58.885 14:04:01 -- nvmf/common.sh@545 -- # IFS=, 00:33:58.885 14:04:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme0", 00:33:58.885 "trtype": "tcp", 00:33:58.885 "traddr": "10.0.0.2", 00:33:58.885 "adrfam": "ipv4", 00:33:58.885 "trsvcid": "4420", 00:33:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.885 "hdgst": false, 00:33:58.885 "ddgst": false 00:33:58.885 }, 00:33:58.885 "method": "bdev_nvme_attach_controller" 00:33:58.885 },{ 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme1", 00:33:58.885 "trtype": "tcp", 00:33:58.885 "traddr": "10.0.0.2", 00:33:58.885 "adrfam": "ipv4", 00:33:58.885 "trsvcid": "4420", 00:33:58.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:58.885 "hdgst": false, 00:33:58.885 "ddgst": false 00:33:58.885 }, 00:33:58.885 "method": "bdev_nvme_attach_controller" 00:33:58.885 },{ 00:33:58.885 "params": { 00:33:58.885 "name": "Nvme2", 00:33:58.886 "trtype": "tcp", 00:33:58.886 "traddr": "10.0.0.2", 00:33:58.886 "adrfam": "ipv4", 00:33:58.886 "trsvcid": "4420", 00:33:58.886 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:58.886 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:58.886 "hdgst": false, 00:33:58.886 "ddgst": false 00:33:58.886 }, 00:33:58.886 "method": "bdev_nvme_attach_controller" 00:33:58.886 }' 00:33:58.886 14:04:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:58.886 14:04:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:58.886 14:04:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.886 14:04:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.886 14:04:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:58.886 14:04:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:58.886 14:04:01 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:33:58.886 14:04:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:58.886 14:04:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:58.886 14:04:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:59.144 ... 00:33:59.144 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:59.144 ... 00:33:59.144 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:59.144 ... 00:33:59.144 fio-3.35 00:33:59.144 Starting 24 threads 00:33:59.144 EAL: No free 2048 kB hugepages reported on node 1 00:34:00.076 [2024-07-11 14:04:02.224546] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:00.076 [2024-07-11 14:04:02.224583] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:10.042 00:34:10.042 filename0: (groupid=0, jobs=1): err= 0: pid=1810401: Thu Jul 11 14:04:12 2024 00:34:10.042 read: IOPS=650, BW=2603KiB/s (2666kB/s)(25.4MiB/10005msec) 00:34:10.042 slat (usec): min=6, max=103, avg=23.19, stdev=21.12 00:34:10.042 clat (usec): min=13089, max=44194, avg=24376.10, stdev=1332.76 00:34:10.042 lat (usec): min=13125, max=44211, avg=24399.29, stdev=1330.77 00:34:10.042 clat percentiles (usec): 00:34:10.042 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.042 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.042 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.042 | 99.00th=[26870], 99.50th=[27132], 99.90th=[43779], 99.95th=[43779], 00:34:10.042 | 99.99th=[44303] 00:34:10.042 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2607.16, stdev=76.45, samples=19 00:34:10.042 iops : min= 608, max= 672, avg=651.79, stdev=19.11, samples=19 00:34:10.042 lat (msec) : 20=0.25%, 50=99.75% 00:34:10.042 cpu : usr=99.14%, sys=0.48%, ctx=14, majf=0, minf=43 00:34:10.042 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.042 filename0: (groupid=0, jobs=1): err= 0: pid=1810402: Thu Jul 11 14:04:12 2024 00:34:10.042 read: IOPS=652, BW=2612KiB/s (2674kB/s)(25.6MiB/10023msec) 00:34:10.042 slat (usec): min=5, max=124, avg=48.56, stdev=22.51 00:34:10.042 clat (usec): min=7123, max=37075, avg=24096.75, stdev=1743.63 00:34:10.042 lat (usec): min=7178, max=37184, avg=24145.31, stdev=1743.87 00:34:10.042 clat percentiles (usec): 00:34:10.042 | 1.00th=[21103], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.042 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.042 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26084], 00:34:10.042 | 99.00th=[26608], 99.50th=[27132], 99.90th=[36963], 99.95th=[36963], 00:34:10.042 | 99.99th=[36963] 00:34:10.042 bw ( KiB/s): min= 2448, max= 2816, per=4.18%, avg=2613.58, stdev=86.42, samples=19 
00:34:10.042 iops : min= 612, max= 704, avg=653.37, stdev=21.62, samples=19 00:34:10.042 lat (msec) : 10=0.73%, 20=0.06%, 50=99.21% 00:34:10.042 cpu : usr=99.11%, sys=0.48%, ctx=16, majf=0, minf=50 00:34:10.042 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.042 filename0: (groupid=0, jobs=1): err= 0: pid=1810403: Thu Jul 11 14:04:12 2024 00:34:10.042 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10021msec) 00:34:10.042 slat (nsec): min=4036, max=97360, avg=25853.25, stdev=20406.52 00:34:10.042 clat (usec): min=482, max=37838, avg=23782.06, stdev=3643.37 00:34:10.042 lat (usec): min=491, max=37907, avg=23807.92, stdev=3645.09 00:34:10.042 clat percentiles (usec): 00:34:10.042 | 1.00th=[ 1369], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:34:10.042 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.042 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.042 | 99.00th=[26870], 99.50th=[27132], 99.90th=[37487], 99.95th=[37487], 00:34:10.042 | 99.99th=[38011] 00:34:10.042 bw ( KiB/s): min= 2432, max= 3896, per=4.27%, avg=2665.20, stdev=301.52, samples=20 00:34:10.042 iops : min= 608, max= 974, avg=666.30, stdev=75.38, samples=20 00:34:10.042 lat (usec) : 500=0.03%, 750=0.07% 00:34:10.042 lat (msec) : 2=1.44%, 4=0.48%, 10=0.72%, 20=0.03%, 50=97.23% 00:34:10.042 cpu : usr=99.17%, sys=0.45%, ctx=15, majf=0, minf=49 00:34:10.042 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:10.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.042 issued rwts: total=6679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.042 filename0: (groupid=0, jobs=1): err= 0: pid=1810404: Thu Jul 11 14:04:12 2024 00:34:10.042 read: IOPS=655, BW=2624KiB/s (2687kB/s)(25.7MiB/10023msec) 00:34:10.042 slat (usec): min=6, max=118, avg=37.34, stdev=20.11 00:34:10.042 clat (usec): min=6889, max=37010, avg=24118.12, stdev=1967.24 00:34:10.042 lat (usec): min=6904, max=37051, avg=24155.46, stdev=1968.01 00:34:10.042 clat percentiles (usec): 00:34:10.042 | 1.00th=[13829], 5.00th=[22938], 10.00th=[23200], 20.00th=[23725], 00:34:10.042 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.043 | 99.00th=[26870], 99.50th=[27132], 99.90th=[36963], 99.95th=[36963], 00:34:10.043 | 99.99th=[36963] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2816, per=4.20%, avg=2623.20, stdev=95.79, samples=20 00:34:10.043 iops : min= 608, max= 704, avg=655.80, stdev=23.95, samples=20 00:34:10.043 lat (msec) : 10=0.49%, 20=1.16%, 50=98.36% 00:34:10.043 cpu : usr=98.70%, sys=0.77%, ctx=89, majf=0, minf=29 00:34:10.043 IO depths : 1=4.8%, 2=9.8%, 4=21.9%, 8=55.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename0: (groupid=0, jobs=1): err= 0: pid=1810406: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=647, BW=2589KiB/s (2651kB/s)(25.3MiB/10007msec) 00:34:10.043 slat (usec): min=5, max=122, avg=41.56, stdev=23.31 00:34:10.043 clat (usec): min=8424, max=47114, avg=24363.41, stdev=2688.65 00:34:10.043 lat (usec): min=8439, max=47131, avg=24404.97, stdev=2687.59 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[15139], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:34:10.043 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[26084], 95.00th=[26608], 00:34:10.043 | 99.00th=[36963], 99.50th=[41681], 99.90th=[46924], 99.95th=[46924], 00:34:10.043 | 99.99th=[46924] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2584.63, stdev=84.50, samples=19 00:34:10.043 iops : min= 608, max= 672, avg=646.11, stdev=21.14, samples=19 00:34:10.043 lat (msec) : 10=0.22%, 20=1.67%, 50=98.12% 00:34:10.043 cpu : usr=98.66%, sys=0.78%, ctx=25, majf=0, minf=40 00:34:10.043 IO depths : 1=4.1%, 2=9.1%, 4=20.4%, 8=57.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=93.2%, 8=2.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename0: (groupid=0, jobs=1): err= 0: pid=1810407: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=648, BW=2593KiB/s (2655kB/s)(25.3MiB/10005msec) 00:34:10.043 slat (usec): min=6, max=122, avg=47.24, stdev=22.03 00:34:10.043 clat (usec): min=12153, max=44804, avg=24275.03, stdev=1660.45 00:34:10.043 lat (usec): min=12160, max=44823, avg=24322.27, stdev=1659.49 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.043 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26084], 00:34:10.043 | 99.00th=[27132], 99.50th=[34866], 99.90th=[44303], 99.95th=[44827], 00:34:10.043 | 99.99th=[44827] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2596.47, stdev=78.96, samples=19 00:34:10.043 iops : min= 608, max= 672, avg=649.11, stdev=19.75, samples=19 00:34:10.043 lat (msec) : 20=0.15%, 50=99.85% 00:34:10.043 cpu : usr=99.04%, sys=0.56%, ctx=29, majf=0, minf=43 00:34:10.043 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename0: (groupid=0, jobs=1): err= 0: pid=1810408: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=652, BW=2611KiB/s (2674kB/s)(25.6MiB/10025msec) 00:34:10.043 slat (usec): min=6, max=130, avg=38.17, stdev=21.91 00:34:10.043 clat (usec): min=6842, max=37439, avg=24235.67, stdev=1704.15 00:34:10.043 lat (usec): min=6857, max=37482, avg=24273.84, stdev=1703.08 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[21890], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.043 | 30.00th=[23987], 
40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.043 | 99.00th=[26608], 99.50th=[26870], 99.90th=[36963], 99.95th=[37487], 00:34:10.043 | 99.99th=[37487] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2610.90, stdev=87.31, samples=20 00:34:10.043 iops : min= 608, max= 704, avg=652.70, stdev=21.84, samples=20 00:34:10.043 lat (msec) : 10=0.49%, 20=0.34%, 50=99.17% 00:34:10.043 cpu : usr=96.05%, sys=2.08%, ctx=710, majf=0, minf=49 00:34:10.043 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename0: (groupid=0, jobs=1): err= 0: pid=1810409: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=648, BW=2593KiB/s (2656kB/s)(25.3MiB/10004msec) 00:34:10.043 slat (usec): min=6, max=114, avg=47.72, stdev=20.57 00:34:10.043 clat (usec): min=11487, max=57109, avg=24248.00, stdev=2152.43 00:34:10.043 lat (usec): min=11507, max=57132, avg=24295.72, stdev=2152.26 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.043 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26084], 00:34:10.043 | 99.00th=[26870], 99.50th=[34866], 99.90th=[56886], 99.95th=[56886], 00:34:10.043 | 99.99th=[56886] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2688, per=4.14%, avg=2589.16, stdev=68.51, samples=19 00:34:10.043 iops : min= 608, max= 672, avg=647.26, stdev=17.14, samples=19 00:34:10.043 lat (msec) : 20=0.46%, 50=99.29%, 100=0.25% 00:34:10.043 cpu : usr=96.88%, sys=1.65%, ctx=369, majf=0, minf=44 00:34:10.043 IO depths : 1=5.8%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename1: (groupid=0, jobs=1): err= 0: pid=1810410: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=658, BW=2636KiB/s (2699kB/s)(25.8MiB/10016msec) 00:34:10.043 slat (usec): min=6, max=185, avg=26.42, stdev=22.30 00:34:10.043 clat (usec): min=9083, max=43945, avg=24087.20, stdev=2638.09 00:34:10.043 lat (usec): min=9094, max=43958, avg=24113.62, stdev=2639.32 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[15795], 5.00th=[18744], 10.00th=[22676], 20.00th=[23462], 00:34:10.043 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[26870], 00:34:10.043 | 99.00th=[32113], 99.50th=[33817], 99.90th=[41681], 99.95th=[41681], 00:34:10.043 | 99.99th=[43779] 00:34:10.043 bw ( KiB/s): min= 2432, max= 2784, per=4.22%, avg=2636.00, stdev=90.79, samples=20 00:34:10.043 iops : min= 608, max= 696, avg=659.00, stdev=22.70, samples=20 00:34:10.043 lat (msec) : 10=0.06%, 20=7.30%, 50=92.64% 00:34:10.043 cpu : usr=95.22%, sys=2.46%, ctx=89, majf=0, minf=38 00:34:10.043 IO depths : 1=1.9%, 2=4.0%, 4=10.1%, 8=70.6%, 
16=13.5%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=90.9%, 8=6.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename1: (groupid=0, jobs=1): err= 0: pid=1810411: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10005msec) 00:34:10.043 slat (nsec): min=5983, max=90345, avg=35899.62, stdev=20267.10 00:34:10.043 clat (usec): min=5609, max=45670, avg=24064.97, stdev=2366.06 00:34:10.043 lat (usec): min=5622, max=45687, avg=24100.87, stdev=2368.24 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[13566], 5.00th=[22676], 10.00th=[23462], 20.00th=[23725], 00:34:10.043 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.043 | 99.00th=[26870], 99.50th=[28705], 99.90th=[45876], 99.95th=[45876], 00:34:10.043 | 99.99th=[45876] 00:34:10.043 bw ( KiB/s): min= 2432, max= 3072, per=4.20%, avg=2625.63, stdev=128.68, samples=19 00:34:10.043 iops : min= 608, max= 768, avg=656.37, stdev=32.18, samples=19 00:34:10.043 lat (msec) : 10=0.32%, 20=3.50%, 50=96.18% 00:34:10.043 cpu : usr=99.00%, sys=0.62%, ctx=14, majf=0, minf=34 00:34:10.043 IO depths : 1=0.7%, 2=6.7%, 4=24.1%, 8=56.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename1: (groupid=0, jobs=1): err= 0: pid=1810412: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=651, BW=2607KiB/s (2669kB/s)(25.5MiB/10002msec) 00:34:10.043 slat (usec): min=6, max=106, avg=41.68, stdev=22.75 00:34:10.043 clat (usec): min=3071, max=52768, avg=24255.45, stdev=2239.88 00:34:10.043 lat (usec): min=3078, max=52806, avg=24297.14, stdev=2241.10 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[15926], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.043 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.043 | 99.00th=[26870], 99.50th=[31851], 99.90th=[52691], 99.95th=[52691], 00:34:10.043 | 99.99th=[52691] 00:34:10.043 bw ( KiB/s): min= 2448, max= 2688, per=4.16%, avg=2596.21, stdev=73.87, samples=19 00:34:10.043 iops : min= 612, max= 672, avg=649.05, stdev=18.47, samples=19 00:34:10.043 lat (msec) : 4=0.03%, 10=0.46%, 20=0.64%, 50=98.62%, 100=0.25% 00:34:10.043 cpu : usr=98.99%, sys=0.57%, ctx=46, majf=0, minf=54 00:34:10.043 IO depths : 1=0.2%, 2=5.1%, 4=19.8%, 8=61.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:34:10.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 complete : 0=0.0%, 4=93.3%, 8=2.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.043 issued rwts: total=6518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.043 filename1: (groupid=0, jobs=1): err= 0: pid=1810414: Thu Jul 11 14:04:12 2024 00:34:10.043 read: IOPS=652, BW=2609KiB/s (2671kB/s)(25.5MiB/10027msec) 00:34:10.043 slat (usec): min=6, max=110, avg=42.51, stdev=21.11 
00:34:10.043 clat (usec): min=6874, max=50277, avg=24202.65, stdev=1888.84 00:34:10.043 lat (usec): min=6888, max=50310, avg=24245.16, stdev=1890.82 00:34:10.043 clat percentiles (usec): 00:34:10.043 | 1.00th=[19268], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:34:10.043 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.043 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.044 | 99.00th=[27657], 99.50th=[30016], 99.90th=[36963], 99.95th=[36963], 00:34:10.044 | 99.99th=[50070] 00:34:10.044 bw ( KiB/s): min= 2544, max= 2816, per=4.18%, avg=2610.10, stdev=77.44, samples=20 00:34:10.044 iops : min= 636, max= 704, avg=652.50, stdev=19.38, samples=20 00:34:10.044 lat (msec) : 10=0.49%, 20=0.67%, 50=98.81%, 100=0.03% 00:34:10.044 cpu : usr=98.77%, sys=0.68%, ctx=23, majf=0, minf=77 00:34:10.044 IO depths : 1=2.9%, 2=7.9%, 4=21.9%, 8=57.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename1: (groupid=0, jobs=1): err= 0: pid=1810415: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=652, BW=2609KiB/s (2671kB/s)(25.5MiB/10004msec) 00:34:10.044 slat (usec): min=6, max=106, avg=42.52, stdev=19.27 00:34:10.044 clat (usec): min=4532, max=58464, avg=24167.24, stdev=2337.79 00:34:10.044 lat (usec): min=4555, max=58481, avg=24209.76, stdev=2339.54 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.044 | 99.00th=[30540], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:34:10.044 | 99.99th=[58459] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2599.26, stdev=74.65, samples=19 00:34:10.044 iops : min= 608, max= 672, avg=649.79, stdev=18.67, samples=19 00:34:10.044 lat (msec) : 10=0.49%, 20=1.44%, 50=98.04%, 100=0.03% 00:34:10.044 cpu : usr=98.90%, sys=0.66%, ctx=54, majf=0, minf=30 00:34:10.044 IO depths : 1=5.2%, 2=10.9%, 4=23.0%, 8=53.3%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename1: (groupid=0, jobs=1): err= 0: pid=1810416: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=649, BW=2597KiB/s (2659kB/s)(25.4MiB/10006msec) 00:34:10.044 slat (usec): min=6, max=114, avg=45.92, stdev=21.02 00:34:10.044 clat (usec): min=14638, max=43939, avg=24237.08, stdev=1575.80 00:34:10.044 lat (usec): min=14655, max=43957, avg=24283.00, stdev=1576.61 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[21365], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.044 | 99.00th=[29754], 99.50th=[32637], 99.90th=[41681], 99.95th=[41681], 00:34:10.044 | 99.99th=[43779] 00:34:10.044 bw ( KiB/s): min= 
2432, max= 2688, per=4.16%, avg=2598.74, stdev=72.80, samples=19 00:34:10.044 iops : min= 608, max= 672, avg=649.68, stdev=18.20, samples=19 00:34:10.044 lat (msec) : 20=0.79%, 50=99.21% 00:34:10.044 cpu : usr=98.98%, sys=0.62%, ctx=16, majf=0, minf=32 00:34:10.044 IO depths : 1=5.0%, 2=11.0%, 4=24.5%, 8=52.0%, 16=7.5%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename1: (groupid=0, jobs=1): err= 0: pid=1810417: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=652, BW=2611KiB/s (2674kB/s)(25.6MiB/10025msec) 00:34:10.044 slat (usec): min=6, max=114, avg=37.40, stdev=20.35 00:34:10.044 clat (usec): min=6939, max=40018, avg=24224.16, stdev=1730.89 00:34:10.044 lat (usec): min=6954, max=40036, avg=24261.56, stdev=1730.04 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[21627], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.044 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.044 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.044 | 99.00th=[26870], 99.50th=[26870], 99.90th=[36963], 99.95th=[37487], 00:34:10.044 | 99.99th=[40109] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2611.20, stdev=87.11, samples=20 00:34:10.044 iops : min= 608, max= 704, avg=652.80, stdev=21.78, samples=20 00:34:10.044 lat (msec) : 10=0.49%, 20=0.28%, 50=99.24% 00:34:10.044 cpu : usr=99.02%, sys=0.59%, ctx=16, majf=0, minf=36 00:34:10.044 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename1: (groupid=0, jobs=1): err= 0: pid=1810418: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=649, BW=2598KiB/s (2660kB/s)(25.4MiB/10002msec) 00:34:10.044 slat (usec): min=7, max=109, avg=47.03, stdev=18.44 00:34:10.044 clat (usec): min=12204, max=57426, avg=24238.80, stdev=1988.25 00:34:10.044 lat (usec): min=12218, max=57448, avg=24285.83, stdev=1988.26 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[26084], 00:34:10.044 | 99.00th=[26608], 99.50th=[26870], 99.90th=[57410], 99.95th=[57410], 00:34:10.044 | 99.99th=[57410] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2688, per=4.15%, avg=2593.37, stdev=72.10, samples=19 00:34:10.044 iops : min= 608, max= 672, avg=648.32, stdev=18.04, samples=19 00:34:10.044 lat (msec) : 20=0.37%, 50=99.38%, 100=0.25% 00:34:10.044 cpu : usr=99.01%, sys=0.59%, ctx=23, majf=0, minf=43 00:34:10.044 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 
latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename2: (groupid=0, jobs=1): err= 0: pid=1810419: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=649, BW=2597KiB/s (2660kB/s)(25.4MiB/10004msec) 00:34:10.044 slat (usec): min=6, max=101, avg=43.46, stdev=20.10 00:34:10.044 clat (usec): min=17453, max=44192, avg=24301.02, stdev=1388.61 00:34:10.044 lat (usec): min=17467, max=44233, avg=24344.48, stdev=1387.54 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.044 | 99.00th=[26870], 99.50th=[27919], 99.90th=[44303], 99.95th=[44303], 00:34:10.044 | 99.99th=[44303] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2600.68, stdev=74.40, samples=19 00:34:10.044 iops : min= 608, max= 672, avg=650.16, stdev=18.61, samples=19 00:34:10.044 lat (msec) : 20=0.31%, 50=99.69% 00:34:10.044 cpu : usr=98.85%, sys=0.61%, ctx=80, majf=0, minf=41 00:34:10.044 IO depths : 1=5.2%, 2=10.5%, 4=22.4%, 8=54.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename2: (groupid=0, jobs=1): err= 0: pid=1810420: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=649, BW=2597KiB/s (2660kB/s)(25.4MiB/10004msec) 00:34:10.044 slat (usec): min=7, max=109, avg=47.77, stdev=16.94 00:34:10.044 clat (usec): min=21504, max=44324, avg=24236.90, stdev=1312.97 00:34:10.044 lat (usec): min=21519, max=44348, avg=24284.66, stdev=1311.69 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26084], 00:34:10.044 | 99.00th=[26608], 99.50th=[27132], 99.90th=[44303], 99.95th=[44303], 00:34:10.044 | 99.99th=[44303] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2600.42, stdev=74.55, samples=19 00:34:10.044 iops : min= 608, max= 672, avg=650.11, stdev=18.64, samples=19 00:34:10.044 lat (msec) : 50=100.00% 00:34:10.044 cpu : usr=97.97%, sys=1.08%, ctx=103, majf=0, minf=46 00:34:10.044 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename2: (groupid=0, jobs=1): err= 0: pid=1810421: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=650, BW=2600KiB/s (2663kB/s)(25.4MiB/10005msec) 00:34:10.044 slat (usec): min=6, max=125, avg=43.90, stdev=23.15 00:34:10.044 clat (usec): min=4446, max=54234, avg=24196.29, stdev=2687.87 00:34:10.044 lat (usec): min=4452, max=54253, avg=24240.19, stdev=2688.18 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[15401], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 
60.00th=[24249], 00:34:10.044 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.044 | 99.00th=[31851], 99.50th=[42206], 99.90th=[54264], 99.95th=[54264], 00:34:10.044 | 99.99th=[54264] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2736, per=4.15%, avg=2590.53, stdev=91.75, samples=19 00:34:10.044 iops : min= 608, max= 684, avg=647.63, stdev=22.94, samples=19 00:34:10.044 lat (msec) : 10=0.49%, 20=1.40%, 50=97.86%, 100=0.25% 00:34:10.044 cpu : usr=98.87%, sys=0.63%, ctx=45, majf=0, minf=41 00:34:10.044 IO depths : 1=5.5%, 2=11.3%, 4=23.8%, 8=52.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:10.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.044 issued rwts: total=6504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.044 filename2: (groupid=0, jobs=1): err= 0: pid=1810422: Thu Jul 11 14:04:12 2024 00:34:10.044 read: IOPS=654, BW=2617KiB/s (2680kB/s)(25.6MiB/10004msec) 00:34:10.044 slat (usec): min=6, max=112, avg=44.43, stdev=20.41 00:34:10.044 clat (usec): min=3397, max=43873, avg=24067.20, stdev=2099.16 00:34:10.044 lat (usec): min=3404, max=43890, avg=24111.63, stdev=2102.20 00:34:10.044 clat percentiles (usec): 00:34:10.044 | 1.00th=[16581], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:34:10.044 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.044 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25297], 95.00th=[26084], 00:34:10.044 | 99.00th=[26608], 99.50th=[29754], 99.90th=[43779], 99.95th=[43779], 00:34:10.044 | 99.99th=[43779] 00:34:10.044 bw ( KiB/s): min= 2432, max= 2784, per=4.18%, avg=2608.53, stdev=83.12, samples=19 00:34:10.044 iops : min= 608, max= 696, avg=652.11, stdev=20.80, samples=19 00:34:10.044 lat (msec) : 4=0.03%, 10=0.43%, 20=1.53%, 50=98.01% 00:34:10.045 cpu : usr=98.67%, sys=0.75%, ctx=96, majf=0, minf=31 00:34:10.045 IO depths : 1=4.7%, 2=10.4%, 4=23.0%, 8=53.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:34:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 issued rwts: total=6546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.045 filename2: (groupid=0, jobs=1): err= 0: pid=1810423: Thu Jul 11 14:04:12 2024 00:34:10.045 read: IOPS=654, BW=2617KiB/s (2680kB/s)(25.6MiB/10025msec) 00:34:10.045 slat (usec): min=6, max=121, avg=31.43, stdev=22.24 00:34:10.045 clat (usec): min=4203, max=37534, avg=24215.20, stdev=1899.15 00:34:10.045 lat (usec): min=4211, max=37547, avg=24246.62, stdev=1898.55 00:34:10.045 clat percentiles (usec): 00:34:10.045 | 1.00th=[17957], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.045 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:34:10.045 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[26346], 00:34:10.045 | 99.00th=[26608], 99.50th=[26870], 99.90th=[37487], 99.95th=[37487], 00:34:10.045 | 99.99th=[37487] 00:34:10.045 bw ( KiB/s): min= 2432, max= 2944, per=4.19%, avg=2617.30, stdev=105.85, samples=20 00:34:10.045 iops : min= 608, max= 736, avg=654.30, stdev=26.48, samples=20 00:34:10.045 lat (msec) : 10=0.70%, 20=0.34%, 50=98.96% 00:34:10.045 cpu : usr=98.69%, sys=0.72%, ctx=20, majf=0, minf=46 00:34:10.045 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, 
>=64=0.0% 00:34:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.045 filename2: (groupid=0, jobs=1): err= 0: pid=1810424: Thu Jul 11 14:04:12 2024 00:34:10.045 read: IOPS=649, BW=2597KiB/s (2659kB/s)(25.5MiB/10041msec) 00:34:10.045 slat (usec): min=5, max=123, avg=41.65, stdev=23.18 00:34:10.045 clat (usec): min=8483, max=45895, avg=24245.49, stdev=2116.04 00:34:10.045 lat (usec): min=8498, max=45912, avg=24287.14, stdev=2116.02 00:34:10.045 clat percentiles (usec): 00:34:10.045 | 1.00th=[16188], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:34:10.045 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.045 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.045 | 99.00th=[28705], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:34:10.045 | 99.99th=[45876] 00:34:10.045 bw ( KiB/s): min= 2432, max= 2688, per=4.17%, avg=2602.05, stdev=68.48, samples=19 00:34:10.045 iops : min= 608, max= 672, avg=650.47, stdev=17.15, samples=19 00:34:10.045 lat (msec) : 10=0.21%, 20=1.14%, 50=98.65% 00:34:10.045 cpu : usr=98.94%, sys=0.63%, ctx=33, majf=0, minf=35 00:34:10.045 IO depths : 1=2.1%, 2=7.2%, 4=20.8%, 8=58.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:34:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 complete : 0=0.0%, 4=93.4%, 8=1.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 issued rwts: total=6518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.045 filename2: (groupid=0, jobs=1): err= 0: pid=1810425: Thu Jul 11 14:04:12 2024 00:34:10.045 read: IOPS=652, BW=2612KiB/s (2674kB/s)(25.5MiB/10007msec) 00:34:10.045 slat (usec): min=6, max=114, avg=36.00, stdev=20.25 00:34:10.045 clat (usec): min=5959, max=69901, avg=24226.40, stdev=3087.37 00:34:10.045 lat (usec): min=5987, max=69930, avg=24262.41, stdev=3088.68 00:34:10.045 clat percentiles (usec): 00:34:10.045 | 1.00th=[14091], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:34:10.045 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.045 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[26608], 00:34:10.045 | 99.00th=[33424], 99.50th=[38011], 99.90th=[61080], 99.95th=[61080], 00:34:10.045 | 99.99th=[69731] 00:34:10.045 bw ( KiB/s): min= 2432, max= 2704, per=4.18%, avg=2609.68, stdev=76.53, samples=19 00:34:10.045 iops : min= 608, max= 676, avg=652.42, stdev=19.13, samples=19 00:34:10.045 lat (msec) : 10=0.18%, 20=3.76%, 50=95.81%, 100=0.24% 00:34:10.045 cpu : usr=99.04%, sys=0.54%, ctx=54, majf=0, minf=45 00:34:10.045 IO depths : 1=4.3%, 2=9.2%, 4=21.1%, 8=56.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 complete : 0=0.0%, 4=93.3%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 issued rwts: total=6534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.045 filename2: (groupid=0, jobs=1): err= 0: pid=1810426: Thu Jul 11 14:04:12 2024 00:34:10.045 read: IOPS=653, BW=2615KiB/s (2678kB/s)(25.6MiB/10006msec) 00:34:10.045 slat (usec): min=6, max=121, avg=41.05, stdev=23.24 00:34:10.045 clat (usec): min=11892, 
max=45124, avg=24101.76, stdev=2153.12 00:34:10.045 lat (usec): min=11900, max=45141, avg=24142.82, stdev=2153.82 00:34:10.045 clat percentiles (usec): 00:34:10.045 | 1.00th=[13566], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:34:10.045 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:34:10.045 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25822], 95.00th=[26346], 00:34:10.045 | 99.00th=[28181], 99.50th=[33424], 99.90th=[44827], 99.95th=[44827], 00:34:10.045 | 99.99th=[45351] 00:34:10.045 bw ( KiB/s): min= 2432, max= 2912, per=4.19%, avg=2619.79, stdev=103.12, samples=19 00:34:10.045 iops : min= 608, max= 728, avg=654.95, stdev=25.78, samples=19 00:34:10.045 lat (msec) : 20=2.51%, 50=97.49% 00:34:10.045 cpu : usr=98.20%, sys=1.01%, ctx=52, majf=0, minf=40 00:34:10.045 IO depths : 1=5.1%, 2=10.5%, 4=22.1%, 8=54.7%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:10.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.045 issued rwts: total=6542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:10.045 00:34:10.045 Run status group 0 (all jobs): 00:34:10.045 READ: bw=61.0MiB/s (64.0MB/s), 2589KiB/s-2666KiB/s (2651kB/s-2730kB/s), io=612MiB (642MB), run=10002-10041msec 00:34:10.305 14:04:12 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:10.305 14:04:12 -- target/dif.sh@43 -- # local sub 00:34:10.305 14:04:12 -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.305 14:04:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.305 14:04:12 -- target/dif.sh@36 -- # local sub_id=0 00:34:10.305 14:04:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.305 14:04:12 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:10.305 14:04:12 -- target/dif.sh@36 -- # local sub_id=1 00:34:10.305 14:04:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.305 14:04:12 -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:10.305 14:04:12 -- target/dif.sh@36 -- # local sub_id=2 00:34:10.305 14:04:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- 
# set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # NULL_DIF=1 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # numjobs=2 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # iodepth=8 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # runtime=5 00:34:10.305 14:04:12 -- target/dif.sh@115 -- # files=1 00:34:10.305 14:04:12 -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:10.305 14:04:12 -- target/dif.sh@28 -- # local sub 00:34:10.305 14:04:12 -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.305 14:04:12 -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.305 14:04:12 -- target/dif.sh@18 -- # local sub_id=0 00:34:10.305 14:04:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 bdev_null0 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.305 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.305 [2024-07-11 14:04:12.733326] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.305 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.305 14:04:12 -- target/dif.sh@30 -- # for sub in "$@" 00:34:10.305 14:04:12 -- target/dif.sh@31 -- # create_subsystem 1 00:34:10.305 14:04:12 -- target/dif.sh@18 -- # local sub_id=1 00:34:10.305 14:04:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:10.305 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.306 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.306 bdev_null1 00:34:10.306 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.306 14:04:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:10.306 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.306 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.306 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.306 14:04:12 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:10.306 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.306 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.565 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.565 14:04:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.565 14:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:10.565 14:04:12 -- common/autotest_common.sh@10 -- # set +x 00:34:10.565 14:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:10.565 14:04:12 -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:10.565 14:04:12 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:10.565 14:04:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:10.565 14:04:12 -- nvmf/common.sh@520 -- # config=() 00:34:10.565 14:04:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.565 14:04:12 -- nvmf/common.sh@520 -- # local subsystem config 00:34:10.565 14:04:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:10.565 14:04:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.565 14:04:12 -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.565 14:04:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:10.565 { 00:34:10.565 "params": { 00:34:10.565 "name": "Nvme$subsystem", 00:34:10.565 "trtype": "$TEST_TRANSPORT", 00:34:10.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.565 "adrfam": "ipv4", 00:34:10.565 "trsvcid": "$NVMF_PORT", 00:34:10.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.565 "hdgst": ${hdgst:-false}, 00:34:10.565 "ddgst": ${ddgst:-false} 00:34:10.565 }, 00:34:10.565 "method": "bdev_nvme_attach_controller" 00:34:10.565 } 00:34:10.565 EOF 00:34:10.565 )") 00:34:10.565 14:04:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:10.565 14:04:12 -- target/dif.sh@54 -- # local file 00:34:10.565 14:04:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.565 14:04:12 -- target/dif.sh@56 -- # cat 00:34:10.565 14:04:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:10.565 14:04:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.565 14:04:12 -- common/autotest_common.sh@1320 -- # shift 00:34:10.565 14:04:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:10.565 14:04:12 -- nvmf/common.sh@542 -- # cat 00:34:10.565 14:04:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.565 14:04:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:10.565 14:04:12 -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.565 14:04:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:10.565 14:04:12 -- target/dif.sh@73 -- # cat 00:34:10.565 14:04:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:10.565 { 00:34:10.565 "params": { 00:34:10.565 "name": 
"Nvme$subsystem", 00:34:10.565 "trtype": "$TEST_TRANSPORT", 00:34:10.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.565 "adrfam": "ipv4", 00:34:10.565 "trsvcid": "$NVMF_PORT", 00:34:10.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.565 "hdgst": ${hdgst:-false}, 00:34:10.565 "ddgst": ${ddgst:-false} 00:34:10.565 }, 00:34:10.565 "method": "bdev_nvme_attach_controller" 00:34:10.565 } 00:34:10.565 EOF 00:34:10.565 )") 00:34:10.565 14:04:12 -- nvmf/common.sh@542 -- # cat 00:34:10.565 14:04:12 -- target/dif.sh@72 -- # (( file++ )) 00:34:10.565 14:04:12 -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.565 14:04:12 -- nvmf/common.sh@544 -- # jq . 00:34:10.565 14:04:12 -- nvmf/common.sh@545 -- # IFS=, 00:34:10.565 14:04:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:10.565 "params": { 00:34:10.565 "name": "Nvme0", 00:34:10.565 "trtype": "tcp", 00:34:10.565 "traddr": "10.0.0.2", 00:34:10.565 "adrfam": "ipv4", 00:34:10.565 "trsvcid": "4420", 00:34:10.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.565 "hdgst": false, 00:34:10.565 "ddgst": false 00:34:10.565 }, 00:34:10.565 "method": "bdev_nvme_attach_controller" 00:34:10.565 },{ 00:34:10.565 "params": { 00:34:10.565 "name": "Nvme1", 00:34:10.565 "trtype": "tcp", 00:34:10.565 "traddr": "10.0.0.2", 00:34:10.565 "adrfam": "ipv4", 00:34:10.565 "trsvcid": "4420", 00:34:10.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.565 "hdgst": false, 00:34:10.565 "ddgst": false 00:34:10.565 }, 00:34:10.565 "method": "bdev_nvme_attach_controller" 00:34:10.565 }' 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:10.565 14:04:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:10.565 14:04:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:10.565 14:04:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:10.565 14:04:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:10.565 14:04:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:10.565 14:04:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.824 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:10.824 ... 00:34:10.824 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:10.824 ... 00:34:10.824 fio-3.35 00:34:10.824 Starting 4 threads 00:34:10.824 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.391 [2024-07-11 14:04:13.754461] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:11.391 [2024-07-11 14:04:13.754502] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:16.663 00:34:16.663 filename0: (groupid=0, jobs=1): err= 0: pid=1812398: Thu Jul 11 14:04:18 2024 00:34:16.663 read: IOPS=2677, BW=20.9MiB/s (21.9MB/s)(105MiB/5003msec) 00:34:16.663 slat (nsec): min=6137, max=56186, avg=15620.38, stdev=7933.35 00:34:16.663 clat (usec): min=1101, max=5461, avg=2940.61, stdev=450.96 00:34:16.663 lat (usec): min=1122, max=5517, avg=2956.23, stdev=450.93 00:34:16.663 clat percentiles (usec): 00:34:16.663 | 1.00th=[ 1942], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2638], 00:34:16.663 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:34:16.663 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3949], 00:34:16.663 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 5211], 00:34:16.663 | 99.99th=[ 5342] 00:34:16.663 bw ( KiB/s): min=20896, max=22128, per=25.09%, avg=21423.40, stdev=344.98, samples=10 00:34:16.663 iops : min= 2612, max= 2766, avg=2677.90, stdev=43.12, samples=10 00:34:16.663 lat (msec) : 2=1.31%, 4=94.07%, 10=4.61% 00:34:16.663 cpu : usr=97.52%, sys=1.96%, ctx=60, majf=0, minf=51 00:34:16.663 IO depths : 1=0.2%, 2=5.3%, 4=67.4%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 issued rwts: total=13395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.663 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:16.663 filename0: (groupid=0, jobs=1): err= 0: pid=1812399: Thu Jul 11 14:04:18 2024 00:34:16.663 read: IOPS=2646, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:34:16.663 slat (nsec): min=4044, max=71609, avg=14649.92, stdev=10589.91 00:34:16.663 clat (usec): min=819, max=5853, avg=2981.15, stdev=461.54 00:34:16.663 lat (usec): min=831, max=5885, avg=2995.80, stdev=462.01 00:34:16.663 clat percentiles (usec): 00:34:16.663 | 1.00th=[ 2024], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2704], 00:34:16.663 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:34:16.663 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3949], 00:34:16.663 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5538], 99.95th=[ 5604], 00:34:16.663 | 99.99th=[ 5800] 00:34:16.663 bw ( KiB/s): min=20400, max=22080, per=24.79%, avg=21168.80, stdev=542.91, samples=10 00:34:16.663 iops : min= 2550, max= 2760, avg=2646.10, stdev=67.86, samples=10 00:34:16.663 lat (usec) : 1000=0.03% 00:34:16.663 lat (msec) : 2=0.89%, 4=94.48%, 10=4.59% 00:34:16.663 cpu : usr=97.48%, sys=2.18%, ctx=7, majf=0, minf=52 00:34:16.663 IO depths : 1=0.4%, 2=5.1%, 4=65.5%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 issued rwts: total=13236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.663 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:16.663 filename1: (groupid=0, jobs=1): err= 0: pid=1812400: Thu Jul 11 14:04:18 2024 00:34:16.663 read: IOPS=2701, BW=21.1MiB/s (22.1MB/s)(106MiB/5002msec) 00:34:16.663 slat (nsec): min=4050, max=65499, avg=14102.66, stdev=10308.27 00:34:16.663 clat (usec): min=1016, max=8522, avg=2917.65, stdev=430.54 00:34:16.663 lat (usec): min=1033, max=8541, avg=2931.75, stdev=431.29 00:34:16.663 clat percentiles 
(usec): 00:34:16.663 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2638], 00:34:16.663 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:34:16.663 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3359], 95.00th=[ 3752], 00:34:16.663 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5145], 99.95th=[ 7308], 00:34:16.663 | 99.99th=[ 7308] 00:34:16.663 bw ( KiB/s): min=21264, max=22192, per=25.40%, avg=21683.56, stdev=294.35, samples=9 00:34:16.663 iops : min= 2658, max= 2774, avg=2710.44, stdev=36.79, samples=9 00:34:16.663 lat (msec) : 2=1.17%, 4=95.91%, 10=2.92% 00:34:16.663 cpu : usr=97.30%, sys=2.34%, ctx=10, majf=0, minf=22 00:34:16.663 IO depths : 1=0.4%, 2=6.4%, 4=66.0%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 issued rwts: total=13512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.663 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:16.663 filename1: (groupid=0, jobs=1): err= 0: pid=1812401: Thu Jul 11 14:04:18 2024 00:34:16.663 read: IOPS=2649, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:34:16.663 slat (nsec): min=5914, max=71668, avg=14171.20, stdev=10510.44 00:34:16.663 clat (usec): min=977, max=5937, avg=2976.84, stdev=443.99 00:34:16.663 lat (usec): min=994, max=5961, avg=2991.01, stdev=444.50 00:34:16.663 clat percentiles (usec): 00:34:16.663 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2704], 00:34:16.663 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:34:16.663 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3916], 00:34:16.663 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5473], 00:34:16.663 | 99.99th=[ 5604] 00:34:16.663 bw ( KiB/s): min=20681, max=21920, per=24.83%, avg=21200.20, stdev=378.24, samples=10 00:34:16.663 iops : min= 2585, max= 2740, avg=2650.00, stdev=47.31, samples=10 00:34:16.663 lat (usec) : 1000=0.01% 00:34:16.663 lat (msec) : 2=0.79%, 4=94.90%, 10=4.30% 00:34:16.663 cpu : usr=97.18%, sys=2.46%, ctx=6, majf=0, minf=54 00:34:16.663 IO depths : 1=0.2%, 2=4.1%, 4=68.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:16.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.663 issued rwts: total=13253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.663 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:16.663 00:34:16.663 Run status group 0 (all jobs): 00:34:16.663 READ: bw=83.4MiB/s (87.4MB/s), 20.7MiB/s-21.1MiB/s (21.7MB/s-22.1MB/s), io=417MiB (437MB), run=5002-5003msec 00:34:16.663 14:04:19 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:16.663 14:04:19 -- target/dif.sh@43 -- # local sub 00:34:16.663 14:04:19 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.663 14:04:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:16.663 14:04:19 -- target/dif.sh@36 -- # local sub_id=0 00:34:16.663 14:04:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:16.663 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.663 14:04:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:16.663 14:04:19 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.663 14:04:19 -- target/dif.sh@45 -- # for sub in "$@" 00:34:16.663 14:04:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:16.663 14:04:19 -- target/dif.sh@36 -- # local sub_id=1 00:34:16.663 14:04:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.663 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.663 14:04:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:16.663 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.663 00:34:16.663 real 0m24.077s 00:34:16.663 user 4m51.368s 00:34:16.663 sys 0m3.860s 00:34:16.663 14:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 ************************************ 00:34:16.663 END TEST fio_dif_rand_params 00:34:16.663 ************************************ 00:34:16.663 14:04:19 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:16.663 14:04:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:16.663 14:04:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.663 ************************************ 00:34:16.663 START TEST fio_dif_digest 00:34:16.663 ************************************ 00:34:16.663 14:04:19 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:34:16.663 14:04:19 -- target/dif.sh@123 -- # local NULL_DIF 00:34:16.663 14:04:19 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:16.663 14:04:19 -- target/dif.sh@125 -- # local hdgst ddgst 00:34:16.663 14:04:19 -- target/dif.sh@127 -- # NULL_DIF=3 00:34:16.663 14:04:19 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:16.663 14:04:19 -- target/dif.sh@127 -- # numjobs=3 00:34:16.663 14:04:19 -- target/dif.sh@127 -- # iodepth=3 00:34:16.663 14:04:19 -- target/dif.sh@127 -- # runtime=10 00:34:16.663 14:04:19 -- target/dif.sh@128 -- # hdgst=true 00:34:16.663 14:04:19 -- target/dif.sh@128 -- # ddgst=true 00:34:16.663 14:04:19 -- target/dif.sh@130 -- # create_subsystems 0 00:34:16.663 14:04:19 -- target/dif.sh@28 -- # local sub 00:34:16.663 14:04:19 -- target/dif.sh@30 -- # for sub in "$@" 00:34:16.663 14:04:19 -- target/dif.sh@31 -- # create_subsystem 0 00:34:16.663 14:04:19 -- target/dif.sh@18 -- # local sub_id=0 00:34:16.663 14:04:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:16.663 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.663 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.922 bdev_null0 00:34:16.922 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.922 14:04:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:16.922 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.922 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.922 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.922 14:04:19 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:16.922 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.922 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.922 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.922 14:04:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:16.922 14:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:16.922 14:04:19 -- common/autotest_common.sh@10 -- # set +x 00:34:16.922 [2024-07-11 14:04:19.140684] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.922 14:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:16.922 14:04:19 -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:16.922 14:04:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.922 14:04:19 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:16.922 14:04:19 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:16.922 14:04:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:16.922 14:04:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:16.922 14:04:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:16.922 14:04:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:16.922 14:04:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.922 14:04:19 -- common/autotest_common.sh@1320 -- # shift 00:34:16.922 14:04:19 -- target/dif.sh@82 -- # gen_fio_conf 00:34:16.922 14:04:19 -- nvmf/common.sh@520 -- # config=() 00:34:16.922 14:04:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:16.922 14:04:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.922 14:04:19 -- nvmf/common.sh@520 -- # local subsystem config 00:34:16.922 14:04:19 -- target/dif.sh@54 -- # local file 00:34:16.922 14:04:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:16.922 14:04:19 -- target/dif.sh@56 -- # cat 00:34:16.922 14:04:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:16.922 { 00:34:16.922 "params": { 00:34:16.922 "name": "Nvme$subsystem", 00:34:16.922 "trtype": "$TEST_TRANSPORT", 00:34:16.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:16.922 "adrfam": "ipv4", 00:34:16.922 "trsvcid": "$NVMF_PORT", 00:34:16.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:16.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:16.922 "hdgst": ${hdgst:-false}, 00:34:16.922 "ddgst": ${ddgst:-false} 00:34:16.922 }, 00:34:16.922 "method": "bdev_nvme_attach_controller" 00:34:16.922 } 00:34:16.922 EOF 00:34:16.922 )") 00:34:16.922 14:04:19 -- nvmf/common.sh@542 -- # cat 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.922 14:04:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:16.922 14:04:19 -- target/dif.sh@72 -- # (( file <= files )) 00:34:16.922 14:04:19 -- nvmf/common.sh@544 -- # jq . 
00:34:16.922 14:04:19 -- nvmf/common.sh@545 -- # IFS=, 00:34:16.922 14:04:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:16.922 "params": { 00:34:16.922 "name": "Nvme0", 00:34:16.922 "trtype": "tcp", 00:34:16.922 "traddr": "10.0.0.2", 00:34:16.922 "adrfam": "ipv4", 00:34:16.922 "trsvcid": "4420", 00:34:16.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.922 "hdgst": true, 00:34:16.922 "ddgst": true 00:34:16.922 }, 00:34:16.922 "method": "bdev_nvme_attach_controller" 00:34:16.922 }' 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.922 14:04:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.922 14:04:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:16.922 14:04:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:16.922 14:04:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:16.922 14:04:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:16.922 14:04:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:17.180 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:17.180 ... 00:34:17.180 fio-3.35 00:34:17.181 Starting 3 threads 00:34:17.181 EAL: No free 2048 kB hugepages reported on node 1 00:34:17.439 [2024-07-11 14:04:19.841404] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:34:17.439 [2024-07-11 14:04:19.841446] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:29.643 00:34:29.643 filename0: (groupid=0, jobs=1): err= 0: pid=1813530: Thu Jul 11 14:04:30 2024 00:34:29.643 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(351MiB/10004msec) 00:34:29.643 slat (nsec): min=7069, max=54181, avg=20148.21, stdev=6442.39 00:34:29.643 clat (usec): min=4752, max=15317, avg=10679.14, stdev=837.81 00:34:29.643 lat (usec): min=4762, max=15326, avg=10699.29, stdev=837.88 00:34:29.643 clat percentiles (usec): 00:34:29.643 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:34:29.643 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:34:29.643 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:34:29.643 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13698], 99.95th=[13698], 00:34:29.643 | 99.99th=[15270] 00:34:29.643 bw ( KiB/s): min=33792, max=37632, per=32.66%, avg=35813.05, stdev=914.68, samples=19 00:34:29.643 iops : min= 264, max= 294, avg=279.79, stdev= 7.15, samples=19 00:34:29.643 lat (msec) : 10=19.58%, 20=80.42% 00:34:29.643 cpu : usr=96.59%, sys=3.04%, ctx=25, majf=0, minf=196 00:34:29.643 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.643 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.643 filename0: (groupid=0, jobs=1): err= 0: pid=1813531: Thu Jul 11 14:04:30 2024 00:34:29.643 read: IOPS=284, BW=35.5MiB/s (37.2MB/s)(357MiB/10048msec) 00:34:29.643 slat (nsec): min=4353, max=49115, avg=16615.06, stdev=7239.56 00:34:29.643 clat (usec): min=7818, max=48847, avg=10523.16, stdev=1295.33 00:34:29.643 lat (usec): min=7832, max=48862, avg=10539.78, stdev=1294.92 00:34:29.643 clat percentiles (usec): 00:34:29.643 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:34:29.643 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:34:29.643 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:34:29.643 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14484], 99.95th=[49021], 00:34:29.643 | 99.99th=[49021] 00:34:29.643 bw ( KiB/s): min=34629, max=37632, per=33.31%, avg=36521.85, stdev=802.04, samples=20 00:34:29.643 iops : min= 270, max= 294, avg=285.30, stdev= 6.33, samples=20 00:34:29.643 lat (msec) : 10=27.08%, 20=72.85%, 50=0.07% 00:34:29.643 cpu : usr=95.80%, sys=3.85%, ctx=27, majf=0, minf=178 00:34:29.643 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 issued rwts: total=2855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.643 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.643 filename0: (groupid=0, jobs=1): err= 0: pid=1813532: Thu Jul 11 14:04:30 2024 00:34:29.643 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(369MiB/10045msec) 00:34:29.643 slat (nsec): min=2813, max=43701, avg=16363.96, stdev=7347.58 00:34:29.643 clat (usec): min=7620, max=51352, avg=10187.18, stdev=1304.38 00:34:29.643 lat (usec): min=7642, max=51380, avg=10203.54, stdev=1304.40 00:34:29.643 clat percentiles (usec): 00:34:29.643 | 1.00th=[ 
8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:29.643 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:34:29.643 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:34:29.643 | 99.00th=[12256], 99.50th=[12780], 99.90th=[15533], 99.95th=[47449], 00:34:29.643 | 99.99th=[51119] 00:34:29.643 bw ( KiB/s): min=35072, max=39168, per=34.39%, avg=37708.80, stdev=972.52, samples=20 00:34:29.643 iops : min= 274, max= 306, avg=294.60, stdev= 7.60, samples=20 00:34:29.643 lat (msec) : 10=41.42%, 20=58.51%, 50=0.03%, 100=0.03% 00:34:29.643 cpu : usr=96.15%, sys=3.51%, ctx=20, majf=0, minf=106 00:34:29.643 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.643 issued rwts: total=2948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.643 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.643 00:34:29.643 Run status group 0 (all jobs): 00:34:29.643 READ: bw=107MiB/s (112MB/s), 35.0MiB/s-36.7MiB/s (36.7MB/s-38.5MB/s), io=1076MiB (1128MB), run=10004-10048msec 00:34:29.643 14:04:30 -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:29.643 14:04:30 -- target/dif.sh@43 -- # local sub 00:34:29.643 14:04:30 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.643 14:04:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.643 14:04:30 -- target/dif.sh@36 -- # local sub_id=0 00:34:29.643 14:04:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.643 14:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.643 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:34:29.643 14:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.643 14:04:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.643 14:04:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.643 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:34:29.643 14:04:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.643 00:34:29.643 real 0m11.072s 00:34:29.643 user 0m35.850s 00:34:29.643 sys 0m1.382s 00:34:29.643 14:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.643 14:04:30 -- common/autotest_common.sh@10 -- # set +x 00:34:29.643 ************************************ 00:34:29.643 END TEST fio_dif_digest 00:34:29.643 ************************************ 00:34:29.643 14:04:30 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:29.643 14:04:30 -- target/dif.sh@147 -- # nvmftestfini 00:34:29.643 14:04:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:29.643 14:04:30 -- nvmf/common.sh@116 -- # sync 00:34:29.643 14:04:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:29.643 14:04:30 -- nvmf/common.sh@119 -- # set +e 00:34:29.643 14:04:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:29.643 14:04:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:29.643 rmmod nvme_tcp 00:34:29.643 rmmod nvme_fabrics 00:34:29.643 rmmod nvme_keyring 00:34:29.643 14:04:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:29.643 14:04:30 -- nvmf/common.sh@123 -- # set -e 00:34:29.643 14:04:30 -- nvmf/common.sh@124 -- # return 0 00:34:29.643 14:04:30 -- nvmf/common.sh@477 -- # '[' -n 1804800 ']' 00:34:29.643 14:04:30 -- nvmf/common.sh@478 -- # killprocess 1804800 00:34:29.643 14:04:30 -- common/autotest_common.sh@926 -- # '[' -z 1804800 ']' 
00:34:29.644 14:04:30 -- common/autotest_common.sh@930 -- # kill -0 1804800 00:34:29.644 14:04:30 -- common/autotest_common.sh@931 -- # uname 00:34:29.644 14:04:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:29.644 14:04:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1804800 00:34:29.644 14:04:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:29.644 14:04:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:29.644 14:04:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1804800' 00:34:29.644 killing process with pid 1804800 00:34:29.644 14:04:30 -- common/autotest_common.sh@945 -- # kill 1804800 00:34:29.644 14:04:30 -- common/autotest_common.sh@950 -- # wait 1804800 00:34:29.644 14:04:30 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:34:29.644 14:04:30 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:30.581 Waiting for block devices as requested 00:34:30.841 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:30.841 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:30.841 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:31.102 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:31.102 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:31.102 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:31.102 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:31.406 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:31.406 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:31.406 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:31.406 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:31.406 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:31.664 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:31.664 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:31.664 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:31.923 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:31.923 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:31.923 14:04:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:31.923 14:04:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:31.923 14:04:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:31.923 14:04:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:31.923 14:04:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.923 14:04:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:31.923 14:04:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.459 14:04:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:34.459 00:34:34.459 real 1m12.615s 00:34:34.459 user 7m8.772s 00:34:34.459 sys 0m16.938s 00:34:34.459 14:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:34.459 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:34:34.459 ************************************ 00:34:34.459 END TEST nvmf_dif 00:34:34.459 ************************************ 00:34:34.459 14:04:36 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:34.459 14:04:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:34.459 14:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:34.459 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:34:34.459 ************************************ 00:34:34.459 START TEST nvmf_abort_qd_sizes 00:34:34.459 ************************************ 00:34:34.459 
14:04:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:34.459 * Looking for test storage... 00:34:34.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.459 14:04:36 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.459 14:04:36 -- nvmf/common.sh@7 -- # uname -s 00:34:34.459 14:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.459 14:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.459 14:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.459 14:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.459 14:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.459 14:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.459 14:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.459 14:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.459 14:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.459 14:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.459 14:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:34.459 14:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:34.459 14:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.459 14:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.459 14:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.459 14:04:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.459 14:04:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.459 14:04:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.459 14:04:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.459 14:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.459 14:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.459 14:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.459 14:04:36 -- paths/export.sh@5 -- # export PATH 00:34:34.459 14:04:36 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.459 14:04:36 -- nvmf/common.sh@46 -- # : 0 00:34:34.459 14:04:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:34.459 14:04:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:34.459 14:04:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:34.459 14:04:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.459 14:04:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.459 14:04:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:34.459 14:04:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:34.459 14:04:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:34.459 14:04:36 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:34:34.459 14:04:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:34.459 14:04:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.459 14:04:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:34.459 14:04:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:34.459 14:04:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:34.459 14:04:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.459 14:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:34.459 14:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.459 14:04:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:34.459 14:04:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:34.459 14:04:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:34.459 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:34:39.732 14:04:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:39.732 14:04:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:39.732 14:04:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:39.732 14:04:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:39.732 14:04:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:39.732 14:04:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:39.732 14:04:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:39.732 14:04:41 -- nvmf/common.sh@294 -- # net_devs=() 00:34:39.732 14:04:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:39.732 14:04:41 -- nvmf/common.sh@295 -- # e810=() 00:34:39.732 14:04:41 -- nvmf/common.sh@295 -- # local -ga e810 00:34:39.732 14:04:41 -- nvmf/common.sh@296 -- # x722=() 00:34:39.732 14:04:41 -- nvmf/common.sh@296 -- # local -ga x722 00:34:39.733 14:04:41 -- nvmf/common.sh@297 -- # mlx=() 00:34:39.733 14:04:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:39.733 14:04:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.733 14:04:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:39.733 14:04:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:39.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:39.733 14:04:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:39.733 14:04:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:39.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:39.733 14:04:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:39.733 14:04:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.733 14:04:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.733 14:04:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:39.733 Found net devices under 0000:86:00.0: cvl_0_0 00:34:39.733 14:04:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:39.733 14:04:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.733 14:04:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.733 14:04:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:39.733 Found net devices under 0000:86:00.1: cvl_0_1 00:34:39.733 14:04:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:39.733 14:04:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:39.733 14:04:41 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:39.733 14:04:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:39.733 14:04:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.733 14:04:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.733 14:04:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:39.733 14:04:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.733 14:04:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.733 14:04:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:39.733 14:04:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.733 14:04:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.733 14:04:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:39.733 14:04:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:39.733 14:04:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.733 14:04:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.733 14:04:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.733 14:04:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.733 14:04:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:39.733 14:04:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.733 14:04:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.733 14:04:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.733 14:04:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:39.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:34:39.733 00:34:39.733 --- 10.0.0.2 ping statistics --- 00:34:39.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.733 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:34:39.733 14:04:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:34:39.733 00:34:39.733 --- 10.0.0.1 ping statistics --- 00:34:39.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.733 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:34:39.733 14:04:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.733 14:04:41 -- nvmf/common.sh@410 -- # return 0 00:34:39.733 14:04:41 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:39.733 14:04:41 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:42.268 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:42.268 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:43.206 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:43.206 14:04:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.206 14:04:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:43.206 14:04:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:43.206 14:04:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.206 14:04:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:43.206 14:04:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:43.206 14:04:45 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:34:43.206 14:04:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:43.206 14:04:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:43.206 14:04:45 -- common/autotest_common.sh@10 -- # set +x 00:34:43.206 14:04:45 -- nvmf/common.sh@469 -- # nvmfpid=1821403 00:34:43.206 14:04:45 -- nvmf/common.sh@470 -- # waitforlisten 1821403 00:34:43.206 14:04:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:43.206 14:04:45 -- common/autotest_common.sh@819 -- # '[' -z 1821403 ']' 00:34:43.206 14:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.206 14:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:43.206 14:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.206 14:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:43.207 14:04:45 -- common/autotest_common.sh@10 -- # set +x 00:34:43.207 [2024-07-11 14:04:45.653649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:34:43.207 [2024-07-11 14:04:45.653693] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.466 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.466 [2024-07-11 14:04:45.711983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.466 [2024-07-11 14:04:45.752985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:43.466 [2024-07-11 14:04:45.753113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.466 [2024-07-11 14:04:45.753121] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.466 [2024-07-11 14:04:45.753128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.466 [2024-07-11 14:04:45.753173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.466 [2024-07-11 14:04:45.753240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:43.466 [2024-07-11 14:04:45.753261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.466 [2024-07-11 14:04:45.753263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.033 14:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:44.033 14:04:46 -- common/autotest_common.sh@852 -- # return 0 00:34:44.033 14:04:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:44.033 14:04:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:44.033 14:04:46 -- common/autotest_common.sh@10 -- # set +x 00:34:44.291 14:04:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:34:44.291 14:04:46 -- scripts/common.sh@311 -- # local bdf bdfs 00:34:44.291 14:04:46 -- scripts/common.sh@312 -- # local nvmes 00:34:44.291 14:04:46 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]] 00:34:44.291 14:04:46 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:44.291 14:04:46 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:34:44.291 14:04:46 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:44.291 14:04:46 -- scripts/common.sh@322 -- # uname -s 00:34:44.291 14:04:46 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:34:44.291 14:04:46 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:34:44.291 14:04:46 -- scripts/common.sh@327 -- # (( 1 )) 00:34:44.291 14:04:46 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:34:44.291 14:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:44.291 14:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:44.291 14:04:46 -- common/autotest_common.sh@10 -- # set +x 00:34:44.291 ************************************ 00:34:44.291 START TEST 
spdk_target_abort 00:34:44.291 ************************************ 00:34:44.291 14:04:46 -- common/autotest_common.sh@1104 -- # spdk_target 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:44.291 14:04:46 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:44.291 14:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:44.291 14:04:46 -- common/autotest_common.sh@10 -- # set +x 00:34:47.583 spdk_targetn1 00:34:47.583 14:04:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:47.583 14:04:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.583 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:34:47.583 [2024-07-11 14:04:49.337249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.583 14:04:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:34:47.583 14:04:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.583 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:34:47.583 14:04:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:34:47.583 14:04:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.583 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:34:47.583 14:04:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:34:47.583 14:04:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.583 14:04:49 -- common/autotest_common.sh@10 -- # set +x 00:34:47.583 [2024-07-11 14:04:49.370152] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.583 14:04:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:47.583 14:04:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:47.583 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.876 Initializing NVMe Controllers 00:34:50.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:50.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:50.876 Initialization complete. Launching workers. 00:34:50.876 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 15423, failed: 0 00:34:50.876 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1410, failed to submit 14013 00:34:50.876 success 783, unsuccess 627, failed 0 00:34:50.876 14:04:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:50.876 14:04:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:50.876 EAL: No free 2048 kB hugepages reported on node 1 00:34:53.408 [2024-07-11 14:04:55.784194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6309d0 is same with the state(5) to be set 00:34:53.408 [2024-07-11 14:04:55.784227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6309d0 is same with the state(5) to be set 00:34:53.408 [2024-07-11 14:04:55.784235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6309d0 is same with the state(5) to be set 00:34:53.408 [2024-07-11 14:04:55.784246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6309d0 is same with the state(5) to be set 00:34:53.667 Initializing NVMe Controllers 00:34:53.667 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:53.667 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:53.667 Initialization complete. Launching workers. 
00:34:53.667 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8725, failed: 0 00:34:53.667 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1253, failed to submit 7472 00:34:53.667 success 312, unsuccess 941, failed 0 00:34:53.667 14:04:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:53.667 14:04:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:34:53.667 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.956 Initializing NVMe Controllers 00:34:56.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:34:56.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:34:56.956 Initialization complete. Launching workers. 00:34:56.956 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 38431, failed: 0 00:34:56.956 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2858, failed to submit 35573 00:34:56.956 success 607, unsuccess 2251, failed 0 00:34:56.956 14:04:59 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:34:56.956 14:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.956 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:34:56.956 14:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:56.956 14:04:59 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:56.956 14:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:56.956 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:34:57.891 14:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:57.891 14:05:00 -- target/abort_qd_sizes.sh@62 -- # killprocess 1821403 00:34:57.891 14:05:00 -- common/autotest_common.sh@926 -- # '[' -z 1821403 ']' 00:34:57.891 14:05:00 -- common/autotest_common.sh@930 -- # kill -0 1821403 00:34:57.891 14:05:00 -- common/autotest_common.sh@931 -- # uname 00:34:57.891 14:05:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:57.891 14:05:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1821403 00:34:58.191 14:05:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:58.191 14:05:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:58.191 14:05:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1821403' 00:34:58.191 killing process with pid 1821403 00:34:58.191 14:05:00 -- common/autotest_common.sh@945 -- # kill 1821403 00:34:58.191 14:05:00 -- common/autotest_common.sh@950 -- # wait 1821403 00:34:58.191 00:34:58.191 real 0m14.025s 00:34:58.191 user 0m55.881s 00:34:58.191 sys 0m2.323s 00:34:58.191 14:05:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.191 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:34:58.191 ************************************ 00:34:58.191 END TEST spdk_target_abort 00:34:58.192 ************************************ 00:34:58.192 14:05:00 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:34:58.192 14:05:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:58.192 14:05:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:58.192 14:05:00 -- common/autotest_common.sh@10 -- 
# set +x 00:34:58.192 ************************************ 00:34:58.192 START TEST kernel_target_abort 00:34:58.192 ************************************ 00:34:58.192 14:05:00 -- common/autotest_common.sh@1104 -- # kernel_target 00:34:58.192 14:05:00 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:34:58.192 14:05:00 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:34:58.192 14:05:00 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:34:58.192 14:05:00 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:34:58.192 14:05:00 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:34:58.192 14:05:00 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:34:58.192 14:05:00 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:58.192 14:05:00 -- nvmf/common.sh@627 -- # local block nvme 00:34:58.192 14:05:00 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:34:58.192 14:05:00 -- nvmf/common.sh@630 -- # modprobe nvmet 00:34:58.192 14:05:00 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:58.192 14:05:00 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:00.750 Waiting for block devices as requested 00:35:00.750 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:00.750 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:00.750 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.010 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.010 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.010 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.269 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.269 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.269 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.269 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.527 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.527 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.527 14:05:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:01.527 14:05:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:01.527 14:05:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:01.527 14:05:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:01.527 14:05:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:01.787 No valid GPT data, bailing 00:35:01.787 14:05:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:01.787 14:05:04 -- scripts/common.sh@393 -- # pt= 00:35:01.787 14:05:04 -- scripts/common.sh@394 -- # return 1 00:35:01.787 14:05:04 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:01.787 14:05:04 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:01.787 14:05:04 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:01.787 14:05:04 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:01.787 14:05:04 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:01.787 14:05:04 -- nvmf/common.sh@652 -- # echo 
SPDK-kernel_target 00:35:01.787 14:05:04 -- nvmf/common.sh@654 -- # echo 1 00:35:01.787 14:05:04 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:01.787 14:05:04 -- nvmf/common.sh@656 -- # echo 1 00:35:01.787 14:05:04 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:01.787 14:05:04 -- nvmf/common.sh@663 -- # echo tcp 00:35:01.787 14:05:04 -- nvmf/common.sh@664 -- # echo 4420 00:35:01.787 14:05:04 -- nvmf/common.sh@665 -- # echo ipv4 00:35:01.787 14:05:04 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:01.787 14:05:04 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:01.787 00:35:01.787 Discovery Log Number of Records 2, Generation counter 2 00:35:01.787 =====Discovery Log Entry 0====== 00:35:01.787 trtype: tcp 00:35:01.787 adrfam: ipv4 00:35:01.787 subtype: current discovery subsystem 00:35:01.787 treq: not specified, sq flow control disable supported 00:35:01.787 portid: 1 00:35:01.787 trsvcid: 4420 00:35:01.787 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:01.787 traddr: 10.0.0.1 00:35:01.787 eflags: none 00:35:01.787 sectype: none 00:35:01.787 =====Discovery Log Entry 1====== 00:35:01.787 trtype: tcp 00:35:01.787 adrfam: ipv4 00:35:01.787 subtype: nvme subsystem 00:35:01.787 treq: not specified, sq flow control disable supported 00:35:01.787 portid: 1 00:35:01.787 trsvcid: 4420 00:35:01.787 subnqn: kernel_target 00:35:01.787 traddr: 10.0.0.1 00:35:01.787 eflags: none 00:35:01.787 sectype: none 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:01.787 14:05:04 -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:01.787 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.077 Initializing NVMe Controllers 00:35:05.077 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:05.077 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:05.078 Initialization complete. Launching workers. 00:35:05.078 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 77921, failed: 0 00:35:05.078 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 77921, failed to submit 0 00:35:05.078 success 0, unsuccess 77921, failed 0 00:35:05.078 14:05:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:05.078 14:05:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:05.078 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.364 Initializing NVMe Controllers 00:35:08.364 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:08.364 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:08.364 Initialization complete. Launching workers. 00:35:08.364 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 131426, failed: 0 00:35:08.364 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33082, failed to submit 98344 00:35:08.364 success 0, unsuccess 33082, failed 0 00:35:08.364 14:05:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.364 14:05:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:08.364 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.651 Initializing NVMe Controllers 00:35:11.651 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:35:11.651 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:35:11.651 Initialization complete. Launching workers. 
00:35:11.651 14:05:13 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target
00:35:11.651 14:05:13 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]]
00:35:11.651 14:05:13 -- nvmf/common.sh@677 -- # echo 0
00:35:11.651 14:05:13 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
00:35:11.651 14:05:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:35:11.651 14:05:13 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:35:11.651 14:05:13 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:35:11.651 14:05:13 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*)
00:35:11.651 14:05:13 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet
00:35:11.651
00:35:11.651 real 0m12.859s
00:35:11.651 user 0m6.523s
00:35:11.651 sys 0m2.832s
00:35:11.651 14:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:11.651 14:05:13 -- common/autotest_common.sh@10 -- # set +x
00:35:11.651 ************************************
00:35:11.651 END TEST kernel_target_abort
00:35:11.651 ************************************
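The clean_kernel_target sequence above unwinds the configfs setup in strict reverse order: disable the namespace, drop the port-to-subsystem link, then rmdir the namespace, port, and subsystem directories before unloading the modules. The ordering matters, since configfs refuses to remove a directory that is still referenced. As a sketch (the echo 0 destination is reconstructed the same way as in the setup sketch earlier, not shown by the xtrace):

    echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
    rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
    modprobe -r nvmet_tcp nvmet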
00:35:11.651 14:05:13 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT
00:35:11.651 14:05:13 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini
00:35:11.651 14:05:13 -- nvmf/common.sh@476 -- # nvmfcleanup
00:35:11.651 14:05:13 -- nvmf/common.sh@116 -- # sync
00:35:11.651 14:05:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:35:11.651 14:05:13 -- nvmf/common.sh@119 -- # set +e
00:35:11.651 14:05:13 -- nvmf/common.sh@120 -- # for i in {1..20}
00:35:11.651 14:05:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:35:11.651 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:05:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:35:11.651 14:05:13 -- nvmf/common.sh@123 -- # set -e
00:35:11.651 14:05:13 -- nvmf/common.sh@124 -- # return 0
00:35:11.651 14:05:13 -- nvmf/common.sh@477 -- # '[' -n 1821403 ']'
00:35:11.651 14:05:13 -- nvmf/common.sh@478 -- # killprocess 1821403
00:35:11.651 14:05:13 -- common/autotest_common.sh@926 -- # '[' -z 1821403 ']'
00:35:11.651 14:05:13 -- common/autotest_common.sh@930 -- # kill -0 1821403
00:35:11.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1821403) - No such process
00:35:11.651 14:05:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1821403 is not found'
00:35:11.651 Process with pid 1821403 is not found
00:35:11.651 14:05:13 -- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:35:11.651 14:05:13 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:35:14.187 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:14.187 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:14.187 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:14.188 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:35:14.188 14:05:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:35:14.188 14:05:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:35:14.188 14:05:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:35:14.188 14:05:16 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:35:14.188 14:05:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:14.188 14:05:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:35:14.188 14:05:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:16.093 14:05:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:35:16.093
00:35:16.093 real 0m42.045s
00:35:16.093 user 1m6.571s
00:35:16.093 sys 0m12.975s
00:35:16.093 14:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:35:16.093 14:05:18 -- common/autotest_common.sh@10 -- # set +x
00:35:16.093 ************************************
00:35:16.093 END TEST nvmf_abort_qd_sizes
00:35:16.093 ************************************
00:35:16.093 14:05:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:16.093 14:05:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:16.093 14:05:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:16.093 14:05:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:16.093 14:05:18 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]]
00:35:16.093 14:05:18 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT
00:35:16.093 14:05:18 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup
00:35:16.093 14:05:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:35:16.093 14:05:18 -- common/autotest_common.sh@10 -- # set +x
00:35:16.093 14:05:18 -- spdk/autotest.sh@386 -- # autotest_cleanup
00:35:16.093 14:05:18 -- common/autotest_common.sh@1371 -- # local autotest_es=0
00:35:16.093 14:05:18 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:35:16.093 14:05:18 -- common/autotest_common.sh@10 -- # set +x
00:35:20.284 INFO: APP EXITING
00:35:20.284 INFO: killing all VMs
00:35:20.284 INFO: killing vhost app
00:35:20.284 INFO: EXIT DONE
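The 'No such process' message in the teardown above is the benign path of the killprocess helper: the nvmf target had already exited, so the kill -0 liveness probe at line 930 fails and the helper only reports the fact. A rough sketch of that pattern (a hypothetical reconstruction from the visible guard, probe, and message; the real helper in autotest_common.sh may do more):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1    # mirrors the '[' -z 1821403 ']' guard in the trace
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0                 # already gone counts as success for cleanup
        fi
        kill "$pid"
    }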
00:35:22.822 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:22.822 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:22.822 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:35:25.357 Cleaning
00:35:25.357 Removing: /var/run/dpdk/spdk0/config
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:25.357 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:25.358 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:25.358 Removing: /var/run/dpdk/spdk1/config
00:35:25.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:25.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:25.619 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:25.619 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:25.619 Removing: /var/run/dpdk/spdk1/mp_socket
00:35:25.619 Removing: /var/run/dpdk/spdk2/config
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:25.619 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:25.619 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:25.619 Removing: /var/run/dpdk/spdk3/config
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:25.619 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:25.619 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:25.619 Removing: /var/run/dpdk/spdk4/config
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:25.619 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:25.619 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:25.619 Removing: /dev/shm/bdev_svc_trace.1
00:35:25.619 Removing: /dev/shm/nvmf_trace.0
00:35:25.619 Removing: /dev/shm/spdk_tgt_trace.pid1417798
00:35:25.619 Removing: /var/run/dpdk/spdk0
00:35:25.619 Removing: /var/run/dpdk/spdk1
00:35:25.619 Removing: /var/run/dpdk/spdk2
00:35:25.619 Removing: /var/run/dpdk/spdk3
00:35:25.619 Removing: /var/run/dpdk/spdk4
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1415499
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1416718
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1417798
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1418464
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1419991
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1421286
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1421562
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1421858
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1422159
00:35:25.619 Removing: /var/run/dpdk/spdk_pid1422450
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1422701
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1422949
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1423226
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1423975
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1427004
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1427272
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1427538
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1427767
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1428048
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1428282
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1428630
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1428790
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1429048
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1429286
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1429399
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1429562
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1430113
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1430288
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1430555
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1430776
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1430946
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1431008
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1431244
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1431491
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1431731
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1431980
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1432216
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1432466
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1432706
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1432954
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1433188
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1433435
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1433608
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1433830
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1434008
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1434241
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1434438
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1434681
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1434921
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1435168
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1435405
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1435652
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1435892
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1436143
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1436377
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1436633
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1436865
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1437115
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1437347
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1437607
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1437839
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1438088
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1438320
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1438573
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1438813
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1439056
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1439260
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1439495
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1439685
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1439916
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1440104
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1440347
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1440600
00:35:25.915 Removing: /var/run/dpdk/spdk_pid1440741
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1444335
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1525636
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1529907
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1540606
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1545845
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1549852
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1550553
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1556593
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1556597
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1557533
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1558323
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1559177
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1559865
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1559873
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1560109
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1560121
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1560197
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1561057
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1561982
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1562912
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1563392
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1563433
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1563755
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1564892
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1566101
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1574304
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1574670
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1579359
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1585196
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1587792
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1597925
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1606881
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1608530
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1609474
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1626300
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1630518
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1634825
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1636458
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1638328
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1638573
00:35:25.916 Removing: /var/run/dpdk/spdk_pid1638809
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1639051
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1639572
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1641464
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1642453
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1642956
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1648498
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1654045
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1658927
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1695645
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1699629
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1705664
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1707065
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1708954
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1713172
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1717226
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1724633
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1724635
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1729377
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1729519
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1729744
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1730102
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1730109
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1731530
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1733207
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1734974
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1736647
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1738276
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1739917
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1745822
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1746400
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1748172
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1749232
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1755520
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1758280
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1763544
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1769359
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1774878
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1775359
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1776054
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1776543
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1777226
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1777780
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1778484
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1779066
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1783255
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1783493
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1789468
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1789658
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1791908
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1800006
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1800013
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1805076
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1807057
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1809030
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1810134
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1812129
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1813205
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1822039
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1822519
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1823171
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1825284
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1825789
00:35:26.175 Removing: /var/run/dpdk/spdk_pid1826359
00:35:26.175 Clean
00:35:26.435 killing process with pid 1370553
00:35:34.552 killing process with pid 1370550
00:35:34.552 killing process with pid 1370552
00:35:34.552 killing process with pid 1370551
00:35:34.552 14:05:36 -- common/autotest_common.sh@1436 -- # return 0
00:35:34.552 14:05:36 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:35:34.552 14:05:36 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:34.552 14:05:36 -- common/autotest_common.sh@10 -- # set +x
00:35:34.552 14:05:36 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:35:34.552 14:05:36 -- common/autotest_common.sh@718 -- # xtrace_disable
00:35:34.552 14:05:36 -- common/autotest_common.sh@10 -- # set +x
00:35:34.552 14:05:36 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:34.552 14:05:36 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:34.552 14:05:36 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:34.552 14:05:36 -- spdk/autotest.sh@394 -- # hash lcov
00:35:34.552 14:05:36 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:34.552 14:05:36 -- spdk/autotest.sh@396 -- # hostname
00:35:34.552 14:05:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
geninfo: WARNING: invalid characters removed from testname!
00:35:56.489 14:05:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
14:05:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.058 14:05:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:58.958 14:06:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:00.857 14:06:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:02.760 14:06:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:04.138 14:06:06 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
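The coverage post-processing above is the standard lcov merge-and-prune flow: combine the pre-test baseline capture with the post-test capture, then repeatedly -r (remove) path patterns that should not count against SPDK's own sources. In outline (paths shortened; the genhtml step is a typical follow-up and is not part of this run):

    # merge baseline + test captures into one tracefile
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # prune bundled DPDK, system headers, and standalone tools
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pat" -o cov_total.info
    done
    rm -f cov_base.info cov_test.info
    # genhtml cov_total.info -o coverage   # hypothetical final rendering step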
00:36:04.398 14:06:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:06:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
14:06:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:06:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:06:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:06:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:06:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:06:06 -- paths/export.sh@5 -- $ export PATH
14:06:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
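The PATH churn above comes from /etc/opt/spdk-pkgdep/paths/export.sh, which appears to prepend each pinned toolchain in turn and export once at the end; the duplicate entries it leaves behind are harmless because lookup stops at the first match. Reduced to its apparent shape (versions as pinned on this builder):

    # prepend pinned toolchains so they win over any system copies
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH

The final echo confirms the result: protoc, go, and golangci-lint now resolve from /opt ahead of everything else.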
00:36:04.398 14:06:06 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
14:06:06 -- common/autobuild_common.sh@435 -- $ date +%s
14:06:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720699566.XXXXXX
14:06:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720699566.RPOnuW
14:06:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
14:06:06 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
14:06:06 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
14:06:06 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
14:06:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
14:06:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
14:06:06 -- common/autobuild_common.sh@451 -- $ get_config_params
14:06:06 -- common/autotest_common.sh@387 -- $ xtrace_disable
14:06:06 -- common/autotest_common.sh@10 -- $ set +x
00:36:04.398 14:06:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
14:06:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
14:06:06 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
14:06:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
14:06:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
14:06:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
14:06:06 -- spdk/autopackage.sh@19 -- $ timing_finish
14:06:06 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
14:06:06 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
14:06:06 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
14:06:06 -- spdk/autopackage.sh@20 -- $ exit 0
00:36:04.398 + [[ -n 1315605 ]]
00:36:04.398 + sudo kill 1315605
00:36:04.409 [Pipeline] }
00:36:04.429 [Pipeline] // stage
00:36:04.436 [Pipeline] }
00:36:04.455 [Pipeline] // timeout
00:36:04.461 [Pipeline] }
00:36:04.479 [Pipeline] // catchError
00:36:04.484 [Pipeline] }
00:36:04.503 [Pipeline] // wrap
00:36:04.509 [Pipeline] }
00:36:04.527 [Pipeline] // catchError
00:36:04.537 [Pipeline] stage
00:36:04.539 [Pipeline] { (Epilogue)
00:36:04.555 [Pipeline] catchError
00:36:04.557 [Pipeline] {
00:36:04.572 [Pipeline] echo
00:36:04.574 Cleanup processes
00:36:04.580 [Pipeline] sh
00:36:04.867 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:04.867 1839395 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:04.883 [Pipeline] sh
00:36:05.168 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:05.168 ++ grep -v 'sudo pgrep'
00:36:05.168 ++ awk '{print $1}'
00:36:05.168 + sudo kill -9
00:36:05.168 + true
00:36:05.247 [Pipeline] sh
00:36:05.551 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:17.779 [Pipeline] sh
00:36:18.063 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:18.063 Artifacts sizes are good
00:36:18.077 [Pipeline] archiveArtifacts
00:36:18.084 Archiving artifacts
00:36:18.290 [Pipeline] sh
00:36:18.576 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:18.594 [Pipeline] cleanWs
00:36:18.604 [WS-CLEANUP] Deleting project workspace...
00:36:18.604 [WS-CLEANUP] Deferred wipeout is used...
00:36:18.610 [WS-CLEANUP] done
00:36:18.613 [Pipeline] }
00:36:18.633 [Pipeline] // catchError
00:36:18.645 [Pipeline] sh
00:36:18.925 + logger -p user.info -t JENKINS-CI
00:36:18.934 [Pipeline] }
00:36:18.950 [Pipeline] // stage
00:36:18.955 [Pipeline] }
00:36:18.972 [Pipeline] // node
00:36:18.978 [Pipeline] End of Pipeline
00:36:19.010 Finished: SUCCESS